https://www.quora.com/Which-group-has-higher-ionization-energy-values-alkali-metals-or-alkaline-earth-metals
Which group has higher ionization energy values: alkali metals or alkaline earth metals? - Quora

Sushant Nair · Lives in Thane, Maharashtra, India (2012–present) · 7y
In order to understand this, let's talk about ionization energy (or ionization potential). Ionization energy is the amount of energy required to ionize an atom, i.e., to make an atom lose an electron. This means that the lower the ionization energy, the higher the tendency of the atom to lose an electron. Alkali metal atoms have a higher tendency to lose electrons than alkaline earth metal atoms. In other words, alkali metals have lower ionization energy than alkaline earth metals. So, alkaline earth metals have higher ionization energy.
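For a concrete check of this ordering, here are approximate first ionization energies from standard tables (values quoted from memory, for illustration, not taken from the answer above): IE1(Na) ≈ 496 kJ/mol versus IE1(Mg) ≈ 738 kJ/mol, and IE1(K) ≈ 419 kJ/mol versus IE1(Ca) ≈ 590 kJ/mol. In each period, the Group 2 value is clearly the higher one.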
Guy Clentsmith · Studied Chemistry at The University of British Columbia · 1y
Originally Answered: Which group of elements has the highest ionization energy: alkali metals, alkaline earth metals, or transition elements?
As we face a Period, that is, a ROW of the Periodic Table, ionization energy increases ACROSS the row from LEFT to RIGHT. The Noble Gases have the highest ionization energy of their respective Periods, and the alkali metals express the lowest such energy. As always, we interrogate the oxidation reaction:

Atom(g) + Δ(ionization energy) → Atom(g)⁺ + e⁻

Klaus Belsner · Former Research Scientist at University of Ulm (1993–2004) · 6y
Related: Why is the first ionization energy of alkaline earth metal much higher than that of alkali metals of the same period?
An explanation can be derived from the shielding factor of the electrons surrounding the positive nucleus (Slater 1951). The heavier alkali metals have a bigger cloud of electrons, so the electron at the greatest distance, which is the one removed upon ionization, experiences a weaker field than in a light element such as lithium. However, don't forget that in principle hydrogen is also some sort of alkali, because it gives up an electron; there this effect is even more pronounced. Hydrogen is only not seen as an alkaline element because the hydrogen ion reacts with hydroxide to form water, which is the opposite of an alkali, but from the viewpoint of ionization it behaves like the alkali elements.
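As a rough illustration of the Slater (1951) shielding idea mentioned above, here is a minimal Python sketch. It assumes the simplified Slater coefficients for s and p electrons (0.35 for other electrons in the same shell, 0.85 for the n−1 shell, 1.00 for deeper shells); the function and the per-principal-shell grouping are my own simplification, not something from the answer.

```python
# Toy effective-nuclear-charge estimate for the outermost s electron,
# using simplified Slater coefficients for s/p electrons:
# same shell 0.35, (n-1) shell 0.85, deeper shells 1.00.
def z_eff_outer(z, shells):
    """shells: electron counts per principal shell, innermost first."""
    *inner, outer = shells
    screening = 0.35 * (outer - 1)           # other electrons in the same shell
    if inner:
        screening += 0.85 * inner[-1]        # electrons in the (n-1) shell
        screening += 1.00 * sum(inner[:-1])  # all deeper electrons
    return z - screening

# Illustrative shell populations (per principal shell; this matches the
# proper Slater groupings for these light elements, not for d-block ones).
for name, z, shells in [("Li", 3, [2, 1]),
                        ("Na", 11, [2, 8, 1]),
                        ("Mg", 12, [2, 8, 2])]:
    print(f"{name}: Z_eff ≈ {z_eff_outer(z, shells):.2f}")
```

Running this gives Z_eff ≈ 1.30 for Li, 2.20 for Na, and 2.85 for Mg: the Group 2 valence electron feels the stronger pull, matching the higher first ionization energy.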
Michael Mombourquette · Retired Chemistry Prof · 8y
Related: Why do alkali metals have low first ionization energy?
Alkali metals have only one valence electron; the rest are all core electrons. The net charge on the core is a good estimate of what that lone outer electron feels. Take Li for example. It has three electrons, and two of them are in the 1s orbital; the one valence electron is in the 2s orbital, which is much bigger. That one electron feels the attraction of the three protons (+3) in the nucleus but the repulsion of the two electrons (−2) in the core, for a net core charge of only +1. Move over to Be, with 4 electrons and 4 protons. Two of the electrons are still core, so now the core charge is +4 − 2 = +2. That is twice the core charge, but the Be valence electrons are approximately the same distance from the nucleus as the one valence electron in Li, so it is harder to pull an electron off Be than off Li.

Assistant Bot · 1y
Alkaline earth metals have higher ionization energy values compared to alkali metals.
Explanation: Alkali metals (Group 1) have one valence electron, which they lose easily to form cations. This results in lower ionization energies. Alkaline earth metals (Group 2) have two valence electrons; while they also lose electrons to form cations, the greater effective nuclear charge acting on their outer shell leads to higher ionization energies compared to alkali metals. As you move down the groups, the ionization energy decreases for both, but alkaline earth metals consistently have higher values than the alkali metals of the same period.
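The bot's two claims (Group 2 sits above Group 1 in every period, and both fall down the group) can be checked against approximate first ionization energies; the kJ/mol figures in this Python sketch are standard-table values quoted from memory, so treat them as illustrative rather than authoritative.

```python
# Approximate first ionization energies in kJ/mol (literature values
# from memory; illustrative only).
group1 = {"Li": 520, "Na": 496, "K": 419, "Rb": 403, "Cs": 376}
group2 = {"Be": 899, "Mg": 738, "Ca": 590, "Sr": 549, "Ba": 503}

for (m1, ie1), (m2, ie2) in zip(group1.items(), group2.items()):
    # The Group 2 element of the same period always has the higher IE1,
    # and both columns decrease going down the group.
    print(f"{m1}: {ie1} kJ/mol  <  {m2}: {ie2} kJ/mol")
```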
S.Sreeram · Updated 6y
Related: What is the difference between alkaline metals and alkaline earth metals?
Thank you for A2A.
What are alkali metals? The alkali metals are the elements found in the first group of the periodic table: lithium (Li), sodium (Na), potassium (K), rubidium (Rb), cesium (Cs) and francium (Fr). They are all metals, very reactive, and none of them occurs as a free metal in nature. Alkali metals are always stored in inert liquids such as kerosene because they rapidly react with the water vapor and oxygen in the air; sometimes they react explosively with other substances. They can achieve the noble gas state easily by removing the outermost electron in the valence shell. The densities of lithium and sodium are less than the density of water; the other elements are denser than water. Many alkali metal compounds (NaCl, KCl, Na2CO3, NaOH) are commercially very important.
What are alkaline earth metals? Alkaline earth metals are found in the second group of the periodic table: beryllium (Be), magnesium (Mg), calcium (Ca), strontium (Sr), barium (Ba) and radium (Ra). Similar to alkali metals, these elements also do not occur freely in nature, and they are also very reactive. All the elements in this group are denser than water. The pure metals are silver-grey colored, but they tend to discolor quickly when exposed to air because they form an oxide layer on the surface. Like alkali metals, these metals are also good conductors of heat and electricity. All of the alkaline earth metals are commercially valuable.
What is the difference between alkali metals and alkaline earth metals?
Electron configuration: Alkali metals have the electronic configuration [noble gas] ns1, and alkaline earth metals have [noble gas] ns2.
Valence: All the alkali metals have one electron in their outermost shell, and all the alkaline earth metals have two outer electrons. To achieve the noble gas configuration, alkali metals need to lose one electron (valence is "one"), whereas alkaline earth metals need to remove two electrons (valence is "two").
Reactivity: Both alkali metals and alkaline earth metals are very reactive; alkali metals are the more reactive of the two.
Ionic charge: Alkali metals have a +1 ionic charge in their compounds, and alkaline earth metals have a +2 ionic charge.
Hardness: Alkali metals are very soft and can be cut with a sharp knife; alkaline earth metals are harder than alkali metals.

Rishab Sriram · Lives in Chennai, Tamil Nadu, India · 4y
Related: Which is more electropositive, alkali metal or alkaline earth metal?
Electropositivity is primarily a metallic attribute; it depends on the metallic character of the element. So, alkali metals are more electropositive.

Charlie Franklin · PhD in Chemistry, Harvard University · 8y
Related: Which group forms more ionic compounds, alkali metals or alkaline earth metals?
An ionic bond is the strong electrostatic force of attraction between oppositely charged ions (an anion and a cation) of elements with significantly different electronegativities; thus they often form between a metal atom and a non-metal atom. The alkaline earth metals (Group 2) have a higher charge and a smaller atomic radius than their Group 1 (alkali metal) counterparts, so the charge density of a Group 2 metal is higher than that of a Group 1 metal in the same period. This means that the Group 2 metal has a greater polarising power, and will distort the electron cloud of the anion in the ionic bond to a greater extent. The anion is therefore polarised more strongly, with its electrons drawn further towards the cation (away from the anion), so the bond is more polar covalent (than ionic): the separation of charge across the bond is reduced and the electrons sit closer to the middle of the bond. This makes the bond stronger and harder to break (the lattice enthalpy will be more exothermic). However, it also means that the compound has more covalent character and is technically less ionic, which can also be explained by the fact that the difference in electronegativity between a Group 2 metal and a particular anion is smaller than the difference between a Group 1 metal (in the same period) and the same anion. Therefore the compound formed from an alkali metal and a particular anion is more ionic than the compound formed between the corresponding alkaline earth metal and the same anion.
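The charge-density argument above can be put in numbers with rough ionic radii. A minimal Python sketch, assuming approximate six-coordinate ionic radii (about 102 pm for Na+, 72 pm for Mg2+, 138 pm for K+, 100 pm for Ca2+; values from memory, for illustration only):

```python
# Rough cation charge densities (charge / radius), illustrating why a
# Group 2 cation polarises an anion more than its Group 1 neighbour.
# Radii are approximate six-coordinate ionic radii in picometres.
ions = {"Na+": (1, 102), "Mg2+": (2, 72), "K+": (1, 138), "Ca2+": (2, 100)}

for name, (charge, radius_pm) in ions.items():
    print(f"{name}: q/r ≈ {charge / radius_pm:.4f} e/pm")
# Mg2+ (~0.028 e/pm) far exceeds Na+ (~0.010 e/pm), so MgX2 compounds
# show more covalent character than NaX ones, as the answer argues.
```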
Chem Infusion · M.Sc (Applied Chemistry), Centurion University of Technology and Management (Graduated 2018) · 6y
Related: Why does the ionization energy of alkaline earth metals decrease down the group?
That is because as you move down the group, the principal quantum number increases, which means new shells are added. As new shells are added, the distance between the outermost shell and the nucleus increases, and the effective nuclear charge felt by the outer electron decreases. Hence it becomes easier to take out the electron from the outermost shell; that is, less energy is required to remove it.

Guy Clentsmith · Chemistry tutor at Self-Employment (2018–present) · 4y
Related: What is trend of ionization energy of alkaline metal in period?
I think you mean the trend of ionization energies of the alkali metals DOWN the Group, i.e. a vertical column of the Periodic Table. Now we assess the reaction:

M(g) + Δ(ionization energy) → M(g)⁺ + e⁻

Now as we descend the Group, the valence electron becomes FARTHER removed from the nuclear core, and while the nucleus does become more positive, the electrostatic attraction follows an inverse square law; thus the alkali metal should be more EASILY OXIDIZED. And you should see if you can find some data to express the trend. Certainly, sodium is more reactive than lithium, and potassium is more reactive than sodium, and this reflects the trend in ionization energy.

Daniel Iyamuremye · Former Senior Lecturer (Retired) (2000–2018) · 4y
Related: What is trend of ionization energy of alkaline metal in period?
In alkali metals, and even in alkaline earth metals, ionization energy decreases down the group. This is explained by the fact that as you go down the group, the atomic volume increases. The valence electrons are shielded from the nuclear attraction by more and more inner electronic shells; hence the ionization energy required to remove the electrons decreases down the group.
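To illustrate the inverse-square point made in the two answers above, one can tabulate how 1/r² falls down Group 1 even as the nuclear charge Z grows; the covalent radii in this Python sketch are approximate literature values quoted from memory.

```python
# The nuclear charge Z grows down Group 1, but the valence electron's
# distance grows too, and Coulomb attraction scales as 1/r**2.
# Radii are rough covalent radii in pm, assumed for illustration.
metals = {"Li": (3, 128), "Na": (11, 166), "K": (19, 203),
          "Rb": (37, 220), "Cs": (55, 244)}

for name, (z, r_pm) in metals.items():
    print(f"{name}: Z = {z:2d}, r ≈ {r_pm} pm, 1/r^2 ≈ {1/r_pm**2:.2e} pm^-2")
# 1/r^2 drops by a factor of ~3.6 from Li to Cs; since the extra core
# electrons screen the extra protons, the net pull on the outer electron
# weakens and ionization energy falls down the group.
```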
Randy Liu · Master from Columbia University (Graduated 2000) · 2y
Related: To which groups do transition metals, alkali metals, and alkaline earth metals belong?
Alkali metals are Group I: Li, Na, K, Rb, Cs, Fr. Alkaline earth metals are Group II: Be, Mg, Ca, Sr, Ba, Ra. Transition metals are all the elements in Groups III–XII, from Sc to Pu (as far as we are aware, elements heavier than Pu do not exist in nature). What differentiates transition metals from the other metals in Groups I, II, and 13–15 is that transition metals have d sub-orbitals that can participate in chemical reactions, whereas in the other groups chemical reactions only involve electrons in s and p orbitals. To be more specific, row number n of the periodic table is called "period n"; for each element in period n, its outermost principal shell is divided into ns (max 2 electrons) and np (max 6 electrons) sub-orbitals. All elements other than transition metals engage in chemical reactions with only their electrons in the ns and np sub-orbitals, but transition metals can also have their electrons in the (n−1)d sub-orbitals participate in reactions.
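To make the d-orbital distinction concrete, compare two standard ground-state configurations (quoted from memory, as an illustration rather than part of the answer): Ca is [Ar] 4s2 and reacts only through its two 4s electrons, so it forms Ca2+ and nothing else; Fe is [Ar] 3d6 4s2, and because its 3d electrons can also participate, it commonly forms both Fe2+ and Fe3+.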
870 741 574 815 833 759 1074 759 685 741 352 556 352 556 500 278 481 574 481 574 519 333 556 593 296 296 556 296 889 593 574 574 574 426 426 370 593 481 759 481 500 500 352 556 352 556 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278 556 556 278 278 278 278 278 827 278 278 278 278 278 278 278 556 278 278 278 556 ] /Encoding /WinAnsiEncoding /BaseFont /BLOLEN+JansonText-Bold /FontDescriptor 621 0 R endobj 614 0 obj << /Type /Font /Subtype /Type1 /FirstChar 1 /LastChar 62 /Widths [ 259 611 370 463 407 519 278 463 630 481 444 296 556 481 259 611 519 463 593 389 315 1000 315 667 778 426 259 500 667 481 407 1000 296 463 389 259 259 259 685 463 685 519 519 944 778 630 593 537 759 352 667 778 667 778 481 537 796 648 278 741 741 333 ] /Encoding 631 0 R /BaseFont /BLOLJB+JansonText-Italic /FontDescriptor 623 0 R /ToUnicode 632 0 R endobj 615 0 obj << /Type /Font /Subtype /Type1 /FirstChar 32 /LastChar 181 /Widths [ 300 320 520 600 600 740 780 300 380 380 600 600 300 260 300 600 600 600 600 600 600 600 600 600 600 600 300 300 600 600 600 600 800 660 660 660 680 600 540 680 680 320 440 660 540 860 700 700 660 700 680 660 500 660 660 920 700 660 600 380 600 380 600 500 400 580 580 540 580 600 400 580 580 300 300 580 300 860 580 600 580 580 380 540 400 600 540 780 580 540 500 380 300 380 600 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 600 600 300 300 300 300 300 740 300 300 300 300 300 300 300 600 300 300 300 600 ] /Encoding /WinAnsiEncoding /BaseFont /BLOMHL+FranklinGothic-Heavy /FontDescriptor 625 0 R endobj 616 0 obj << /Type /Font /Subtype /Type1 /FirstChar 1 /LastChar 2 /Widths [ 278 761 ] /Encoding 633 0 R /BaseFont /BLONNK+ZapfDingbats /FontDescriptor 627 0 R /ToUnicode 634 0 R endobj 617 0 obj << /Type /FontDescriptor /Ascent 710 /CapHeight 650 /Descent -239 /Flags 34 /FontBBox [ -141 -250 1104 812 ] /FontName /BLOHJF+Minion-Regular /ItalicAngle 0 /StemV 78 /XHeight 434 /CharSet (/registered) /FontFile3 618 0 R endobj 618 0 obj << /Filter /FlateDecode /Length 454 /Subtype /Type1C >> stream H‰bdabddsòñ÷ðrÓöÍÌËÌÏÓ JM/ÍI,ɘþ`ø!ÍøC†é‡,ó –rý‘ÏcùzÏð7óof EE‹'ß™¿3ß{úUþ7³£hûêÎÞ®,©ßZìÙ�]™ rUß9ç°gõuö¬–::‡}rGãDù ÍóÒ§WÌ)œÛ´¦k~—DÏŒŽž¶Þæ‰Í“%›'7M¯›W?·fYËtŽUS/^%u(zz¢\Z¦è¬îž%óWÕ%6v””×Éu´6´×I—°ñÕÌú)7ëwõÂï6sÏw³}ožô§f:»ÜçJñÿ<œ¹nrZŒÎ® endstream endobj 619 0 obj << /Type /FontDescriptor /Ascent 759 /CapHeight 713 /Descent -262 /Flags 6 /FontBBox [ -159 -272 1136 981 ] /FontName /BLOKMH+JansonText-Roman /ItalicAngle 0 /StemV 81 /XHeight 433 /CharSet (/space/r/zero/C/one/s/exclam/D/two/t/a/G/three/u/quotedblright/I/A/under\ score/four/v/quotedblleft/J/dollar/eacute/five/E/w/dotlessi/L/percent/em\ dash/six/y/b/M/z/seven/n/O/c/quoteright/eight/K/Q/e/x/parenleft/nine/R/f\ /fi/colon/F/S/parenright/N/h/fl/l/acute/semicolon/U/i/endash/idieresis/V\ /j/P/k/W/copyright/comma/m/X/H/hyphen/o/Y/question/period/T/p/Z/slash/g/\ q/dieresis/d/B) /FontFile3 620 0 R endobj 620 0 obj << /Filter /FlateDecode /Length 9343 /Subtype /Type1C >> stream H‰œT{PT׿ËÂ.\‘ëE½‹ç^QPˆ¦•ÂF1b|!šøˆmVY…å±Ëkä½ï½ìƒ]«¨dQÑÀƒª˜ÄwŠø¾1¢©Ú˜šN›ï’cÚ^cg:�þÓé™9ßœ9çw~çû}óûŽˆðõ!D"Ѥ„å«Þ]±tÖ²4µ&G½VY¬�Z““�¦~y6ŸGðrêÃOó}y‚‰[C}sTù¡&Ñ?8îU ”Âîqpu|[èì†ÂW$Òš]-‰9¹%ùªm™Z6<=‚}+fÁvQFÎ%›Z¢Ñ³5l²:='?7'?M«Ìx“eee±k^â5ì¥F™(ì¾J‹}™«Ò°i¬6?-C™�–¿ƒÍÙÊ.W©s´%¹J61';7M]òðèhög%ÿVö ‘ “ðþ>ÄØqKa“‰x‚H˜L¨"Û‡("ˆ2‚Є… j ¢� .Äu‚Ø(ÔŽ~„J4_Ôë3Ægµ�ÇçŽÏßÅŒ8F-¾ïãÛîä×$‘J6I¥ i©ô�ÿŽ€à€¾1ñc_;¨ ËÈ&Èì²ÆA�y¼-8%¸�T�}8j u"deÈ7‡' 
On¤Sè­ô÷òyè¦Ðž))ˆBg™æ0kcÿ:õFXIØ©i+§]�¾bú¥3gô¼®xcJ¸$¼9baÄàÌÆY¥³œ‘Ó"—E#wGòQµQ»¢šY+KÒ�/0üv®(ø ø†}¿ä0ÙJ.…°Ÿ $î4 H‘­Ùµ¿…nÉãrµyœVëÒØ �K=xN �綸Ýn›‘ÎD¬.e ôz5utd7ååef"Y™· ±í÷ŸzÖOÉNh‚Ô©õÊ÷éxåÚ8 ÊäÔæ¦uŸÐdÈ¥ãÇ¿ð ÒÞá ¾µ �?ð¶¿ÁæàäNa0®ÒÛ·i·Ùi@:]I¥vËÞù›³�®3p¥Uzn'úµ�/M}+’»¢ßUŸ‘FÕ~0PpÓ¿Þbµëä:‹Ž!]é.ÕTÕX]VÒdè>¯»"·Z·Óº:®n—£¦ �”ÂØúÇ!”’í ;‹dü ƒ:¼]/Cð½Ø80ü2L$ã †ŸB-ÔŸ>Ý2|óf˼ÔTí¼…Œ5�zrãòÈHüåÙ³Þ |ç Cžæçágÿ#Vÿbj-Å'Œ.ñ[+y‘ðÓ?ËÇÃïq aë>'W“M�jŸëÙñ;äq8ëä N£Ë.ؤ-bŒF¥Ú@ëíF—í1w nhð?v¬¹ï}RsrM–:o’­Ç[Œ C04PpVrÓõÆÚŠê�Åi›‘N�ãñ\¬§± „…[�Òz‹…GœµõŒ h8Gÿ‹Rš�ŸÃ«0ÒÑUNƒ§®q_wr» 悞�æê\¨{óÎ}Õò £¡ŠÑ�€Æ k€Lÿ\|æ3êÿÔÓûE=pR C°‘ÚÍÕ6·qõå n•TpEåÅ\ùo«™è ÈÄ�ú™P¹�óÛ¶mËÏU©Zs;;[Û>F2GSáè†"‘žŸ þ2F7ø%IðÀOüdÕÞÑ×½¢®ÇÐðX žj }�á¾ Ò±+Öãt ŽÃûp:(¯/2Bã¥KNh„ râÆäd#nÄAŒÌrˆðÞAÖC1œæÍÔƒÚûûi—•+Õ™ Ú’ñþÎÔ“ÕdÑÑ{uÝ]§a=zÁJ§×”%«Í¦2£M߀ö5ÙîJ ŒnB2¨¾Ì3E¢ÆÓP:$æ)þOT‡íH]_ÊUŠ8³ ­Wâ½øN­á,õÚu:ÎèF$õ¶Ú= ”A€ÔYcw1=Ò:Î^‰°¯Ÿ 1VÚë:Gf�¿ÈàÛRAcèWbî ( †kƒd+ÅóøÑM÷^í4;Œ•¦2³‘‹Ë+ªÍ[ådë†d§J^i°,Ìv6 ÏÔÑ:›ÙnAY7¢ÇSéä ,ŽE™‡²å嚬^]3ˆá aþÂĘ ŠíÄA½þäÛn³í•“Ó[>5õ ´»÷q®NF°?Ñ+ê~µO›í¡ÂòG l øüm7‰‹Í�ˆ@ k) bŸà�ò ÕeEÈb†·Av£gïä=.›ÝM; &“B£þáRc ÑÄU›¨Ñ(F¢"¨°¼”}ÍîììÌìì웇Ä¥¨!>Z㣾8>¢5$æØà«§ñÄX¢ÑzwûyNû�M ÉIÛsúïœù¾{ïïþîïþ¾1½gŸ¿ºÉýã±xàè¬ ±4šwy»wg3ûõ²¨C �¥À@œÌJë5wwà¨I?ÏÁ±Ú¤|ˆi»Tqw:å#f@Ü O!ú ª ›;D¤ºá7 ùµYy2²jr•¢ñ½Sä:�Ÿ�èÌ#ÔUÊù_« •vô� žp’ìöÏaèÛ ]‡è)õ0±žúŒ$¾@#Hæ€Ç)«Ç”Úr7pµˆMžXU>Ôx“¦´­µ´•=º:M »Yj00ÔOKy» Q«žwÖ¸�æK½¬Ûxõ3ô[Ƶ)ì”~ù¼XŠ([>ñHG„ 7†¦DêË£ ~«É²ˆ¾²›På" ùnîÈÔT¬8‰µéâRëÊ‚|Áh”Ì ërÉÀÀ·7^ë‘ý.·ÎaJ×ùÞ/A3žZÆUÆú­6‘l^4ÙÐSØú^;ψM/-Ô2¹F«Y/[�Èí•[ .@dÒ++~/¯Ì†à'êp4»dŸ3ës¼–8‹I4û>©4ÿJBÒÛÒÆétû ¬‰¹¬ØOA’Ь¿dþ82¶‰›ëÂÓêtƒš!Ž:ž ¯jðcó“WLep"~î ¬ÆÀOÈÎÑœš�ûìGf‡St0.¿âAÔ8wá—-dÃË»™+.b¨ó¹ëšðK ^€_Ã#±Ó·q&Tƒ¶­ºâ ò:ÕSUÖî§}VÉÂy›U@ ¡ÕB0<,I M‚¸(0„îi΋jûésæÕGÙÙŸNVðøX¼Æ‰G‘ œÉ¤fûwОù ´¼â¥=@ƒ¸¦:‚ZÂ�C‡�¡óg>ø úÐ7Ò!nÌ)öؤi^ü4“½Œ_8kB/P%À'bNáäîì‡=\^C}úv°ïŒr¨Éh± <�ruóF¿ËS¿UúÀ|­1·¶XpD9â$™÷3ªKr¡ZëΓ4<£r’ž³‰/ëYƒ^Ž#˜™ý¨d(Ž9ª¾W#±T@nº¦Ô1ÁmÆÒrôòŸÝë�©jÑþõ7�[(ó˜<‚Çý»%ý§.Ic!æß,z‘Ñu„�R ¤<„³]K‚D(Öôzj$+®xíQñøDõÝ-•ÛíHøÈ~f?}÷?/Õð]©OþÏ¥Î�ýî †’ê:hJkÏú ÷½¢™óºadÞ²Ô^q=uJ J Ö)Œ×G6x…Õô^ÑÁ±æ’BŽ±šœ² }øÚ •˜êÉ^Dµ“¨ý#ª ëßíÏQêiHŒ©wU^ªUŸGTÌ,Ç xNÎåâµE"o2ˆ––Ks>¡·Dp?×Á¦N†m� €¯5¸o·nu£o2ŒìÀÝý÷œ®¸ƒsAÚ&5³'¤J÷uÆ'º=ªàâ�y?+µ+\×»À5üÁÐ;½ K�ÕÛ± „ê1aɽ¦Ø7&Aù×H3íQDu»SìGßÓ{©½çœ àtx<ìê; ·kðÈÌ‚,vͺco1+ ׿®©Î<\S嬨…à±ã÷8 Ð%Îé.qR(N(‘´õH-´=ŒJ€hJËL.'­¨N·‡…7aß‹ßs~Äø%Åç¶»L¨ •Ï+¯#7Ú ¼Q²ú>–ô›6c6£|�IÌ&êÿרö˜š2Q_Žêóáq<äÂ/b›EŽ£�œ!?ÍhµêE³›U] t|ÚÓq…¬�g�+;k�xôMD$Rh ùç =Îë–BˆñûâµNEV%Ty»ªÍI{G¡Iæt ‹µKS¬x,35ǹûGu~rÚºuä+!X¨ �=Aô­† ¯ íøœÚÊ�Qš¢÷ñðÐ(Ýg±»�6Ÿ�)]Á¯7L,Œ×šÎ̉fÃïmV7òse«,´Î–c\Æ®ÅIñð\œ;ßÀ•f0K¬6c‰ÃîF�/¿õ’õï [Õuò†Ì⊆oÎÂÕÊVD½£Àóæ¼R�^­’M& ún4¡³0ñdµÂÜX¹å"ÏÜr|q.½Ò^òË·Ž3‰äéåVeb‘ϯU�r¹Ë®.XŸØiGFh4Àª3t³Ïõ·¢x.ˆ™—íh�Û\:¤}âÒ˪�z$£N�=ó#ÓqÒ ¬¥Ö®f¼š&¬7Õ•Õ¿“xwÄ'ºÓP ÔOO¿ó]ò;]M–ÐNéÖ2Ê6e嶺]ŠK«;íè¾FßS§öìš:Ü<ó8ÖÜG;jsoý_ßyrrÑ‚Êáù€ziIBŠq€ðr’ƒ¢é#–‚œeñI8ùO3’’no˜�ê’Î=‹‰y™ëFêõ,5 ‘Êö±ê’c‡Óõ9ì±BÒæüÊ¢-Xǰ ÁI¬Ï'ò²H]9Ñwí 1cZüüŸÑó‰¯åêщšsJ œ‚„1÷{÷|Ô‹ ¢àRHÙ' ‚%¼Æ «Ktlw�#R»4<áe3YÝ2¢ˆ<ýkäÏ u §@hañ=¨¡Àª�7+ù½nŸx”õX)Ó vïªUÅñI Ç•Øx‡÷—F „34ØöûUꟿAÝd Ø[Ë"1Eçb¡OmIš ™’Õ}0Æ^E½O/û— AÛß.¿Kzù�àwŠ6´›s8yÄt .“jè�Óùuë>Çd½‹ðü…æbŽ´,‡ì3‡ÑðvL'Zê…Û~·èò¢…2$³[>Ù’ŸdQ ™æéÐ ‰É«FÕyÿÁßâlmÂpîÈ¥œSâÆFë¢3ôW"J™DÊ<÷ž¡-E.DÊÞ@XA=0é)~‡´ÿjéæ»IÅ£xe—¯¬bóö¬PA°´Ã ðM ǘmùæ]öB„—�GáfI³µgŒhübèo Z�ñW||@ñqMef°™éé™ñINÎæ°{hùv¿¨•fzÌV™�ynÅÉ…+�¤ÝÜãw£Oà:]ßH4Ôðü¼W’ù€�:0�Ý“µXÅeuòF‹‡•o…øMGQÅšz<Æ?OÑn º‹ÊÉ�ò‰ªCD_–9ûð¾Øh5¤é±Öp™ðÈ�£aª±ãî“[´ÍMe¥÷Hï}¼bE¶÷Øí{—©ófíN³Qâ#Í!C^1�pZ˜š³Ž&¹‚N æ C½- êÃüÉ£"_&;½v»qwZ¶: M;Ü´|¡�Û�z—À8œa"[‰ÝFáÕØ1zg^¾=‹,r²f†§}H–=Ï@ï‘R(qçþ…¯óG~…”Ú —ùÊDR”ì% úÏîØXD 2Ófû6Þfä ~ʧ柦xÕÓ>Þö tÛ³ ”6µÀÃùÿ•ÙvU pF/Ñ¢%‹x m^òˆU |�£[+Âǯ‘Í¢ôr²mYÏæ›æZU}9X«ÙÃuu(†Ãk ~g /Z í¤ƒóE ¿ùÅ'BDmXi«u Áë1PY›Ù¬­[·ªµ»ÌF�«>RåÜSŽ^¬±ûÚ‡'FzŠK‡St��£ZPš�ï�nœ×Eó ɰ ãBæ[ðt;Áx�²¢ˆ^‘ÒUÕž¯?w¥V[qª�‡Ùäe�·šVÌÅk±ˆðrl]›V@Ø$¾’êV<ß‘ÑcYò ¨òlû3‘� 
kp±…FJ·xyÊöÃŒVW娀S]ÛI£‰÷náW˜ŠÎÝV—ƒ¥?Ò>­×"oS£šÀãûbø(åðx}ßs.]“0êÚÚ å…@t'vÔ} Õƒú#i»¶ï675µKGª¨²ÊÍ) ƒÔÕVÿèýÚÜCWÆbjqºÝ¼Ÿ°x´túêTµ¹´”ð÷¶ÕÔ^ÝÿWj–oú|ßÖMDnÅú�®†ÏjWÀ[ÐF¶Þè "Ý2‘VŠ6‘V»X¦R]dª3”E?Œ@q]wQâ¿ÁúD× àŸúþÎö¿AB¿³tv1ŽÁ£æïí{HE3ã¢ä Ù)±”.Õšù•é3ĸ8†&ì+»¨ ŒPòñ<¬Éw– f¤x—ºëH]}¤÷ËoÉÇ�pÁB<úúŒ´½ÆÜ�HÅq1šXÓ6 =]z<³ÆŸ†¹½×Ï)¹F¥Dëq²½¬)Xuàþ-ÖÏø'ãi�æ‚öæåúËgG´"‰„R¢p k³ÓTqQöH ¨Ok�&×AÉÈú†€ Ô Fãà ýÊõ…9yx´{ûkàmóƒoM5Þ\,ä�ïàø�ñ§øÍÛ€ñgÏT6Ÿ@Å\‰‰°‹œâ-WŸR’Ûã• …¹�|¸žŠ5ÿdú«€$<5õÃÁL¨‹é€ �b£3¡FïÝþˆY­GŸMOIÁÓðԿφi0¹Æ ‚ÍgWßC;ÐæÜ/M›I “úó�°;,„‘ Ñã�\½G¶œ/^°ßä(B ›œu0Ü“úñæžDˆ ÔÝ€ÔèD}Í?…¬’ƒ¡RNM><ôcúý/ùl¦rJ7¡¨¶šó‘‘òÊF¤ûãY¯Öà¨Ê3L f#�›u+ž#ßÁØN¹"¤ˆ ÍÈU,‘K2@â&Ù¤ÙÍeCn{9÷=»çì9{K6›dsÙÜî·T¡E¦#µîPA†"«ú-sfÚ~ÑR Ôétæü<ïyŸó=Ïû=ÏÛ—ë3n/fÙ Ž3lžkSŸæ6Ûømöä†bIL;Rëp4©I0[µW“M’Ù ©øME3¡e9ÔØöz‡šƒaëÀ²²2Ö¸…ÐŽ³¶ço¬ß‚?;{É´åÍo +$´/�Ùl(0®Â ºŒ= ÐŽ,Ú²uÓæØÂ÷+OÁTøÂ>8µÛyQ�´S¬ àXÝó‰ãNø£Ëž¬Î¦Õ„yhF^ç#1ÍQh� érnCB‚/£¼}'ñWïE¼úWM6¯Žgí †¶ÛÊñ\�®xÚCø"®/>ªC™Ñ'(r€wgñ�eÕ�t¸y/~ݳ-š|;»ÁÂÛÆIÉÀÓàŠ¾ÓØ�’^�jA¶­àšB¤4ð©ÉJÞÊ,“~ H�…àž½™vþú3Qíi˜ß»¨¥ÆXª&�Û0�¬Ç†¯Y�©4ËøÅ·Qÿ…2ˆPžºeÙ(Jmö–˜¢KÓÝéè(‹–‹‹[ŒÝDêDU¿õ�o»_óë¦ K–¯UUÅ–ZŒq×ú8¦ÁñžÕüXÚì]°ã®� †$¿8�@ùb´Wÿƒ•÷aEÅ… ¤Xìëxg¿ý_Ïw¬8ÙljüR:ÔIÀÇ4¾:‡ÅGÀUBù„,äX;ÏPV�óïWo´)Œ|Rt»ë8Ѻ†÷o%TŸ¥Ð]!ûü0åÜÓêöàÝ.¡ÁÇ,QXÀòòóPxBSnuòþž0oª¶ÔR41sÎVC çñÖ”ø¯]‹jaÆ÷~íôÿIp(7#½4Ù�·j[~A&ØmÒ·½‰¯y:Ël$vºZdÌãù{jèÛN�ÝDø2üZ�Žôý/3–ùF š±}ÿµéýæé.$Ä}86™àØy%16éž3««ãÔ‡ÌðáÏ®õÁ1×oô©c&ƒ+?�ø¤«¦>>éçêxõñ+“à8ñòŸàD”6^TƒhŠÄžÜ™0jå‰ÈÉÏévÿvh? Û¦>1ûEãäis {ûÁ|µU·ÑA®¤9ÞBºXèÛ)î<¶;ž�nŸ,¸™ù‚÷—jìýuäI N;�:À±+à8õÑU9E†,¢Ml­ÃdÁÃŽ ÜÅòœP–öÅ&#Ó»«¢‰±±»PÌáªÔ'LðÇð�7 M C(˜¢êq |Çýç]X€sÑ€©,¯,Í/ÄÕ s'ª�-m™ÿÞ‚òu˜¢xÿé€'L¨ë4Yf>çàHÆi“€·Nüà£ÁÄšÓƒXs3¯ ¾CDj+º$~w3ãÃø �¾ö±nÔ³,·=«é®ž¥�ÍÚR< ³½#µ¯uË9»ñ®æ¶È`Q“~mæúÜ|∻=€I¼|ÙI£”¦ ��ÓÎlíÑ~gÀ]™ß Àõu Zý™HF¥BÔºÑ6…»e—\g�ð» ™oª�“]íÙ„ÏÍÈœGhÕtXÚŸŽéÍ™‹ÁTú«œ­h[Ë,§�àEÎv q8!rá0�ú<ø —> RókÀ¯ãs�¡(pîà¿E�éI#b¨óÒm®X»d:XÍb´È‰8I’P4&aÔ\ÚsP–) ØÂÒ<|�}èæ¨Ä’».FáúhâE8]÷Tç²/‡Þm _eÁ(7¥Èm�úÏ@/ÊL2ÖÄ , ÊØÜ$„øšæŒ÷÷æc$;09ÓJK Z¦ÚV/4¾í£3 t5–—½õRÉÞ�lÕÔó jr8}~Z.¬‚…DozЛ³ˆÔ«Jyl^$¡2F'Bñº®|ß|¸¹ wŠÝ²¤¸Ý/Ì xs�¡ %�fXªFâ:‚>Gµ�¨7ŸS¹�EÉÙÍJ¹÷xt�{NÀ´�󽟈Ò1o²×é”dÌÍ+(ä‹~m(NI·³CŠ´ |[(®Ê™¦æ #¹Tùi Gyk«¶þ †tÚpÏÌöYݳ“)—D!–%Q _ào°§õHò»Ù{÷K 9þuÂ^Áç®®®��%m2§8ˆN:ÈVc9Ï«Ëß|’}ïB¨ïÀ$Ñ¿Çïü¦Ž)w£¥âv•È6…dŽ ouxùv¬zž:Ý^‚ɾgikÝ/xî<Ûè‰� ˤÆ& JW�ÚîеqÝEµyV¿XÀƒJßç{PNG·¨6’ÀÄÖ'B uËn²>æu¹:=Àãt‰ †tL‚ÂB¾ÐP@ ËÐv'í�Ô1–!ÓõöÏÕ™É"%r,†:Ú¬ñ\ý&’L¶X�´ ·Òœ�q’n"ÜáŠtuu¥¤Ë’¤ø9ÉbË“‚áöþSŸ#SýeÕØöbÎN¢q³‰œôôºzv ¤¤+’à— ~·´®ÁF<VÑ…¼Ò…´þ‚ÖXéˆ?YtzE/&±^Xkø,½©)Ž£)ÖeW” ‚Ñ´ú•ª2D¤Fm‡ÆNF̱ÆD(ŸÑ-…/ä}ykÙ+ÈŠèq+¼l…z>ï­å–Q=‘”‹RꆅR/þUÓì“ÿ¤½|C›8ã8Nms&¾8lÎ(äØ Š£ƒ9ÃüóFjÙ l86µ#ÕóÂjIÓÔæìÎIÛ¤Mz^zisµiÒÄÚ¥©�É®l‘¨««›–Qöb0·I™oöbc®ºÁø]}Šì¹”Í Â¨ ìÒn&á©8�g­OE8Ýî÷…p拇szçÐ&<)£.Ô� ðÆIu=•:cúê×ûPÔŸÒÏ5ª„Í]þÎfÕ–-ZA;ª%ÙÏR«e¿/ÖJûE¿ÈR aÑlnïå^¹óB”¹;|3x‹N&::FÙ­ÈhA¦wQ±^°Ì˜s ¸ˆÛã�>º/õ‡ØØÌ¹[!�€Ú=�÷0Žýºc¼[:@“?ª¥@˜’xã•é Ø0­mÔ%ªL5ª§pŒKy”jÆsZh£=ÒPOfpù+Ëi T¢ì2§ºŽÔz½z»½Ñú�¹r¨òËÏÓà C•#ëÚ—=LÂìO3îqHåÉlsÔ5 G´ ^'&¼VvyÙœÿ%>¯íè>íºC€õû�‹ÌX|ï·têüÙ‘Ëå½ èíõh ie©òçIýÿ$•ø×•—HêY'1âõðÿ´¢,µbCb9±O ­¤tÏúð ;Ùæ¨÷ã­ur�§@öŨ,ÿî`ƒ‡&d{ž=‹—Ø3”FÃlg@‡=Øs’‚½Hs‹"‘Ô”Á8¦§µ�ÚO‰Yx Ëéx´v‚‰†ä0 •8]ãÁ<+Õ®vœC„3 L¿8©D"úlvðÆŒù:w}¯ÓÕPËP‰¼œ^î0iYøˆ/ø¸Â?Ô” ¶<ÔEΜ‹ÇÍIoŒká›Û| AoèH‹š�7¶Î?š_GÝœT³šö“ÉÔ0—lpsœ»!É ã®}Áwr�̿ē&í%ž4åž6 üŽ�ü•à ©ºšs�ÁŸ(Êdg�˜A�ͳ…êwjÚ¯ªŒ ÑêmÛQ1Zu߆{ß(Ø&�Ý¡OŽº¬t‰í—Ÿ\zðÛì¨}O7«hëâ{&¨Á¤·ƒ áŠjðÚ�v";: ¸B ›+Beè5Ø¥°´Z†ƒ6¢Ý¨iµ û:è¤Çð1i‘$ì9Udsl¡2†®œ‡–ÈÕ ’½‹×úVªgר+L 0#¸>Í endstream endobj 621 0 obj << /Type /FontDescriptor /Ascent 759 /CapHeight 713 /Descent -261 /Flags 34 /FontBBox [ -156 -272 1136 993 ] /FontName /BLOLEN+JansonText-Bold /ItalicAngle 0 /StemV 127 /XHeight 443 /CharSet (/g/f/n/period/r/space/h/s/i/L/W/t/l/u/k/B/T/m/b/x/colon/C/o/d/F/c/R/p/a/\ e/E/S) /FontFile3 622 0 R endobj 622 0 obj << /Filter /FlateDecode /Length 3719 /Subtype /Type1C >> stream 
H‰Ä–}PùÇ7¼Ô”%z·«»¹¨¨wÕ““´öªˆw§žz‚Žš«Š¼Dð!†÷�a³Ùd“ „D^4ZTBÀ pßð¥ÕÖ×Úi =Ï:ÎÍÍɵs3Ö'ÜÏiÄöÚÎ9sÿuwfggÏoçûý|ŸggEXD&‰¦­XûñÚ÷׿µ&=¿P�¿IU¬�¿B�›5º”ŒÆ‚¤(8=,8#<¤$SÑBÔ>=BóݾHÊ+úÇ�]%bpGÃà”#Óñ¥1X„H¤eo²ZSR°;;G+Ÿ›9O ¿dñbyR–:C%ßXR¨UåÊWçgª 4ê‚t­k�\ž”›+O ­/”§ª UºÐÓ1UòQYòÝ…òt¹¶ =K•—^°G®Þ%»;­-ѨäÉê†- ÃcØŠ(lm$¶1 SbØ!LXŽÍÄÒE¢ò°µá‘ᦈĈ;‘ÙãtbL#¾ •54þã ³'Ü›¸E²DÒ2É}WZ=)Ú2yÒä‚)áS®I¥U� º á ¾-“ý9þ ;r²ÓÒº²ºüýÔè:Œ‡„˜PÍ4ÜûŠ|' �ø¡Ý°Œ Àõø10ÂG°Ѱ ¾‘0ÙI{ý /þü³�ÞK-Z‚^§óÆeFœ‡ßùöÜûóstHŒÈL}s�>“¡8VŸ¬3³U<ã¢~uÑv ~ÞÒ Q@ü“«€7YAŒž¤ÓÜ5 OØÎi ½økïï¿%Ϋž¢, ‰w2ûRRwJ¥v¯Écò\ôh"-…¦òÀH\@Ô «ÂGrFe‡XF0V‹ËŠ’{Ð4}žˆ´hj†õHÜíû-Þ¯i—ÍÉ[¢x_ãm„“å5Lè°Pö¨æ Ed!b½pÁJX¢›¼é§›�HÇA‡£ÓxÍ4D�¡9€��Ì—p’øpèF9æ5÷_^Ýõ6ˆ Æ l‘(ó\£@{Ït>§‰++aª–JiK÷T”�ÈÄÙý44øÄ0¡NŸÐHëìV‹“tÖñÏyŸDátpO;œÝ°¼‘¨«µ &®’¡Ð››Ö±HAÆ×x.ÐÁ$1L<÷= A8l±•R)©LJJjÊ÷ xÿef…È À­€è7%ü4<4 ãd'; ‡ÿrbé¢%ªUHJí ͆?÷a ÐúÈO[›r]dG·íxï«DQÏñíõ¾ŠVٴ΢´ô¯é˜œvÂQoç­Ô½c×ûžW²QK72¹?  ‹•k,ªÜ±¨¤½åÍ‘‘¸–’Žü@Ì9˜’<烕#ËdÇ-fG5‰+ ÕÅ�©Ñ9„|ç ê]B‰ˆ¸Åˆ ‘îõ¢uÄj厤›ÎÝͧZLl]%YÌXji|ån×,~/Ua«³4�8÷”þ +CäUÝÐæ^�ÿ-ðþ•¸qzðo4¡ù-d³›wÑöÊ!ö~(Î.6021ÐVBqÞrD)SÛô$¾°¢Öh°Ð¬nŠ-% u¬Ëmçœ~óÁ…þ0…¼ÍY>¤ŸÏùq‚sÄ78ëï@îûô� ”Ái³xHÁÃÕ»·Ã}D[“à>m­s×[ Ôæ-Ìfå'J‰ÂÀ˜ôeÖÚþcžÌžÆ½]³!ìùì>Uêä«ZÉý¼³ÞAwÿz¶Ýƒ¤(¯Ëf³“ÇVót/ÂhÆeä…ô#d³“ïh /Â[†¼'PØ~¹ÓÞl‹ò…>˜MD“Ù^M¥ä1š UºDQn²{¹¦V?—QjÊFúŠV,-ɰNŸ‘.ùíèé )ö èµÛ=^³ÍD)•Œr«rk¨¯¸ªòÎÔi¡Ý–&¡6ªµ¸u^Q^SYk¤¤[ôWÀ¬âlL„¡0ˆ1àøÓà,x$û%÷Íͪ4§'SËÓŠ‹r‰¼Ce‡{ è¦:ˆ–þ¶–@ ?›z[Œ[Q‘ùúOö®'Öõå^>{ú°ßOe¹|Ú òÊå Mv½ËLW°…²ÌìqÓ?"3ˆ‹”º’]ÆcÞ«G÷?ôQÒ^½ß¤ . €½þž¢˜?[C�ÜÛá®ìl÷Éóß8‘l)JØ ÉX•ê”Eàˆ«wB�ÝLᛪ×äW-§·›¶‚0Úu±¶®j ñ¼M\Àd ¦Ñl£øöã�$~Äwã1Ù ;÷C%ÌŒˆ9;ËóvÓ!·ƒãü¢PÀrˆn±¬#5™œ¹ù‹'wÚaüWôã$4ù)ÙÕÚîkÓý Å£YHü6š‹f<™4L¿õ7»Pá6…ÆR?œôÒKÌ yÁñáàxˆ‘¥fk¶½SïÜÝÿì›ãgK—QáÍÎpf�(v ÂÑ$sk�ç[ºOÐ�¥ÔD˜ ,oït’¬V—ƒX¡ÆJíª ~"¸AŒÇþ'‰Y¯ !­Ð߃m~˜â�€×! B¿lépPÖl·zÏü&£¥ˆ\€þI|µÆDu¦a :ƒ.™&œžn÷œæ|”Æ.6íÆín­)´éö‚6ˆ,¨Á;S¹é4T¹È2÷˹Ì9g®g˜©0£" 0¶°ƒZínmÖÚhÝf¥f·!jÓøcÍ~Ó|ûcÏX�­)µþêÿóæ¼ïó>—÷ËF¿,N¢µp \7 sC~“Û ûK~M¢Uµ_]r !œ“øÜás§R‡wmÓZìÅ-ïb ¥B; 1x=%Ϻrˉ¼«òÄá;éSøùÞãV mþ€ ‹ ˆìâLU¨¡¿‰Ô­ÓÌv0tÎþÖJÁH´ t?uLÑÇwUdTÖïãhªž£Y+±Ÿg$>%N�Šç„EW¿ÏæÜzŸ8à‡p¥2$òáÖË”§P ¢È ,†Ë/ÄaŵÚdSûÅ}µü•{ÖGxì>;­yk/ʧXg§ ‹ÀH 5cÙÊl§.Ê×°:ÚœHD£‰DsT­nnVSªA±¢|>é h_dÜ 6¤'ñï»Ç�¥ÜchiÞÝèƒÊ©³£üãÉ~£Çn!šíêpÈ÷«hóQÂ5q|4ž�Þ#yø [‹•÷·½•@�‡ð Ÿõù¨3�§H‰In‡Ï›ØÆÆ¦ÆÜ+gµ›xÓà„°êÚ:ív°ný ½Noì o¢û u{¸û�6ݨÄÖ~…ò¥6Ó7‰—ëá?q/ï–䯣ÇîÍ »Aòï«ü‡ ¸¡¢©~;u¬CÞFn,×jÀAçáá¶¹ KRé~ò&oÉ,ë–²�©Ë©{Ï” ÍôšLÂX˜ ã|ÍË/�M¨h~íZTT…6¡2XTµbËסâß_ âë¯Hñ»§�H±FŽÔ ƒ»bé_œÎº•ºÙçQéÏæ„ä–}³éœo›.Sš¦&™8y ð>Ü5Á ´­ºnÏ 5ˆ¸ÂBd\´í�f-RR&ÖÁš o©”%Hkˆiñ�ÇGR7”ÿÛ m/¿Þ8ô1¥Ê—�‚øú…›1röê�,à®$� íWv{‡&��XùÿÇÈñGF'5”×ÔìҀؽAÍ7ÄÓ�FöÆó¦àJ˜%ÿ? ?� ©¦úŽSc‘É“±ê�ƒí ñ~Ò-¸\Ûèéº0D8Y·¬¢¤³­2PAþþ©ª'z¶Ï61–:­p¥œ9Ù H ãà@ç»%» 9¯œÔG“– òÊõäÁ¬öXMŸ ¥ÏNÂÅTFÑÚ1ì²lXpaÂ56FH´‘…ú'Žg9€]Öªkëž©Îá8Îa"̼]¾�Ѹâ•yÍ¿®œ˜óy¬=vŠmh­'K˜3ïØ ï¤¤ê´á\ÃüÏDÞeyíÏgæ¼ «ðÇùX0r(š :9ƒ�h÷2nï ×7La�#Þ„”påt ¼«—2Øëõ_¼ º÷Ÿ^ßEX›¹ÉFëiÑè¡&ãb"™Læ¸�BÏŸ‘©3t­‚¯~¸°â“¹¨÷ ì·’/À{-À¬eÕê6sn�žãtvÁÐí;Èw¶½ñZW°ôl¹"ç€W¹=|·•êjb7>צ—ÝÙa±Ñ¼ÞëÃl€óšrTºžo¶‡ÑL¶§C 8øßl�?˜>‚ÿ€~J8˜ endstream endobj 623 0 obj << /Type /FontDescriptor /Ascent 723 /CapHeight 713 /Descent -262 /Flags 70 /FontBBox [ -202 -288 1146 962 ] /FontName /BLOLJB+JansonText-Italic /ItalicAngle -15 /StemV 77 /XHeight 448 /CharSet (/g/z/k/dotlessi/O/A/two/m/Q/o/parenleft/R/p/a/S/q/K/U/N/emdash/B/idieres\ is/r/space/V/T/b/C/s/W/d/F/c/comma/D/t/e/E/G/hyphen/u/f/P/n/I/fi/period/\ v/Z/h/fl/x/J/w/dieresis/i/semicolon/L/H/y/zero/j/M/l) /FontFile3 624 0 R endobj 624 0 obj << /Filter /FlateDecode /Length 8383 /Subtype /Type1C >> stream H‰œ”kp×Çw-[˜D°,»f¯�c 0¤IHRž Nb 1Ôø…[~cƒe[–VÚÕjõÚ•d,쀛B ¦N¡œ˜ iCBC�””G˜ 9—™T”m§ít&sgî‡{χÿù�ÿù“D| A’䈌ùÙóçfŒŸ›[Zi(]RPmœ�eÌÝXœ÷èsJt0¥Éhj\t¤":,¾VeT)x5îL�ßòãÒ&Lþ$�oÕ†? Ý›šnI"âIÒÈK-³ e5ÅE댺±y¿Ð=;yÒ$ݬ|ÃÚ]NM¥± ¤R—Ušg¨(3Tä òŸÖéfmܨ[ü¨¾R·¸ ² ¢öúXšî‘6]q¥.Wg¬ÈÍ/(É­Ø 3êæ—Œ5eºÙ†’²ÜÒš+áEÝãnþÙÞ6J�±C $ˆAqÄP‚ÐDAèSˆ©‘1‚(! 
Aðá%ˆÏbVŒG(ˆ×‰¢…ø€ NšÈÓq¬b®â÷ñÙ ¿JøIyjÀÁĉ_üñ öÉïTç� ô®zîà”Á‘!3‡ÜZŸT”t]S•L&ÏN¾¡ÍÐ^N9<,sØñᓇ;b:5•ºJ§Ñ7Rå‘CF–Œ¼Ï„‘½§[¬ëµN?=M“¶(­0­+íCµ›ûßÜLÞùL�ªºj¡^bv5¼^Êmõ²�xÈ˘ ÍŒÅÃêðð©TŽ„_ãTfbOï¥}’Û0K…½hÉ= $œ Ôxt‘ èRôA¼¶²ÌX‰¦}{bÆœñoÖR¬‡õJ ¹{ÊN.Ô8 Vœ&ÀE7´ó¯ÍvçFpT)ÉGünÎmGj3(ÉPàéßjUÄÛå­KÊ %÷V¼½/Ò²�Q/·DúGEÈÓ½p±WEÍÚMÙN î^‚¶óPGKû®Cô®ÓLŒž�Û^Õ£¼Šõ› i®\ …wà¤ßɃ!˜À…cð5œ>¦©— —¢Ý“Ñ´?+¢ŠŸìIèøÁ�—ò4ºìLCIÎj9Vt{üA˜sй »{—º9çÃ_âÁÓð Þêµø×Å/éOß_•ñ|æ”yÏÃkV6Rœ‹÷0jø”I@‚ÇwiÂÿ–¦ƃü_À>¡v2ºìDàS­Ýîry·ï’бî#GŽ ùáy¸3�up,Kׯ×JnÎVaÂqÖµ(«�ox�.ã¹­¬èp#Ÿ$‚êæNY¥ŠŸ‡sÕÕ,ßÙÝËä³( úV0z)QØÖa…™ÓüV¯Æ™xãŠìÌ “ŸK´Z ¬™¶Úìõ�‚Ë‚²xó¶ìWé½®í¢ÛînlÈÞíb 9š®•ó³…ÚœÃ^Y¥ÒÛx›ÕX[PŸC´ÖQåŽMxÌÃ�ƒu�£Š6 u ë´»� ßÿ¡õ�½æ Tí„)‡¿›×õ\‰b‚fÄ!®†Rš½Eg2w08s©¶]öaúê'ÅãL¶‰SÖ ò7Ö Y´É"Êaywl@êÛ–öþgÛȾ[ŠþÄ蟴9~nm3-ù|²ˆÚ 9–)™s52fS-KošÖË?…ðÛSøœLárHÍ€Ñ7÷xüG™°Ç pɉÁƒ-ß„)Éâ´˜9n«•ÁCs°b‹�Æ)K/CܹsÑg€…‘¡‰•n¤†|¾­?½�ìO¿§€ÍýcµU�yXñÐÅd™8n]Ç׳vÁîA¡ b ð Véá™ë;þz�ï=Jˆ£ #x¿Ìl©áñ°Õ¯¯Séëy¶Ñ&Øv8ÐçvO£‰zxOl¾ d´†9r‰'iȼ5OÇkñ¼—Œ³I;‘ú4)o‡«8Òž�Ѫ©Ó>íæõÎÖ€7r|Èã”e}èïhG�Žmµ+ÖØL,eõò�ÛÓâcŽÀ¸W‚”׿¬cm¶y¥LeC­±˜Æ”ÒjqJ1uÀDôUXÜÞAƒNÙÜÄ•oC¯t„Êt̨Þm– ö/��•ÌŒq9¿wÔ¨ô¼ÍbjÜ.¼Ã£nNhtÖ$j8g}ÝJÌ%8n7}]}¥É¢ 5‚” ²é…è%ÞîtÚ¥"æ‡5ø" $§ðAº©C�Ôu_8Ó‚ÝG ^N}k«ÖKiÙdžîðCÚ×JÄýQí° •MYñ…ÈÀæåù|ƒ=—Åq¬›«ŽÐýe?2q<Çp/è§1pžqžáj¸»n1é§{(Û˼­,‹žå�["±MìdÕzŽdí[E†¶d#g z̺¤b󷉱íåìÇ18žq¦KÇ…Ê»ðû!lR!9ÐܲSZ�zŒÄØ5¶ÀIàEz¬$�ëªS+{+Û³ ÿ4215µ(µØ çãÌÒîÚ…ÒFqË\1. ¾ø\fê´ñ­5Ô�½¼ÕZR…}‰“œª¤ƒ!Þ S�ëÝæ «¶¢y+�AïBÓjlÛÝ&Âé=’S2 œ‡ÁÚÔAkXùí�p4ÓQ‚ ²}¬\©Ð¥êžH?y$Ô Ó’+é÷çÀ,T÷ 2›ˆÙhê5Í ¤"¾Ã­‚õBùðݶwÛ›¯(&]ñ‡}œ×6æóõU…oŒ'b š„‹Ð{wj!.aw“Ûþ„·�3š÷ýÎÅur.l€U�XŸF¯ µ¨­CÍôÝfL”4øg¸áÓëˆBÏ£4ô9Úh)øôúaÊ&pÐWvg%;¿"î%'§–˜ï}PnA¦ù P†=×Ûóµ}Ý 7¨„ŽRs;ÌÄHñf­˜rq–‘…¦O‚üž�¿D‹¡�õÎ:v5ÅŠ,ã’¼N (ݧÏam¨†¦‰~q®ØØ„ëõvÐéÓ•�ÿšã õ{øü;p% f_/¼ƒÌxŒKÐÒs—n–wW·á—f=|)Ñ7öTâ4óJû{ -fw°¯om ?£ÓºÑ|òÚ„72jè|sKù½ªF¬Ð|±ÃJ-ÈûCË dOÂZ>­pù_¤h¢ï[°Àÿ W?øQ¼Ñ=>¹?!¹29ÃüæÆ oµÛémj Ä/à xªÒO=ÈyuuÉKÛö�MëÍ0#ú�8™ Þrqe -Þç-¢^߯{J÷¾ši©9§;À'.{ÝÖÚ£y¥È v»ªØM”[p2¬Š]æÜ�räðŠaY‹j¢Æ<çmŒúõ¡3p h÷©¡fªÓïaTpÂÒÿÂ4Îdó]aТ tv¶��Šhš¤ ¿p„û­ ôúοišb÷t§æw�‡ôM˜wûhÏ„Ïàsæn7×ç¢<¬È[=Êx�Ĩѱ­«óý _PpíHÿú/Ë»Ðø/Aι7�_$OöÜ�ÿø«œ9i«.ó–caЀšIŸ¤¸i7[]SV8ð …Òg½ˆ2žÜ ŸD$×[TI¾��¿ðé‚ · æXjF"jÍ‚YpËÇ_á(ÎIm4ïqGë(1Íqºu>ŒÛðÑ .H7‚‚ÝîÚ�,£ÍÀºl—X"šöK‚è"Ý^ /¤Qå,$âç1í´ÐÏÉŒíåPÞŽµÂÇ“¦€â(83=~ ™cTȧû0ùŒZ˜%¢K÷šÈœÞŠ d‰g :ÆÃi÷'Àip�=ú×Õ�€Ó—èÿ0^í_QœgX°âž“”ª“:s2æxhÌÑZM)6Š Q#^ˆJP "d–eÁ]–ewgfï;³7Y@v¹¹\¹Ù àÝx!ˆÕX45�1§&¦±æÔ|Ëù8m?ô‡þ3Ïû~ïó¾Ïóœýé$Ôuütìj»·ÞÖåˆ,VÑb“8yÇ(¹CÊ”J|VøJ޵ë­ö#[ZôM\µO·-9”º…o�£?B¯z!s½†Ÿ†Aímg�Ë%ÅvGí§hr@ÙÜ–¼5Z¬¡ _¥5sGÁ¨ÑU¡¦JXuU&ó1Úšƒ^{›Š„³Åä<ÜÝßÌ„}“sÉ›¥º—¾bv»°Òé�v>W\°õþ Ë« .¶·×,WƒÝ›¦\oQz‹Îå©ó £°”Gc•áÉÕá6Ÿ—îô¸]‚É΂Ò$® $%g( i›îr§ Õª¹ŠìŠD ˆD‹QZ´ÆD†’UxäätAóB¾&¤°g2†l²Ù}hAjDlû@Åb®´, {V–ã5U6Ö3&êd€èÎØ8õÉ.Ê©t¹ký·®0 'áfXþø†Õ“3§.L£ V½Ë):½n¦ãëŒ�k$çþÔÙLû}¶‘Åâ®Oàó+°rñ¬ÎÅ{}cÆü²’÷×€ƒ¡¹(½ŽVãrÏ„d³Î ßÝgs .“ƒU 8¥ré\›�S±n¶iÄ$/—ñÛ´J :ŒTèÌ{k¦ûœ0wLFO3Ø7™@Ü„RL_}„&bÙ /×zÍXÉÎ:k‚ý-àb�ßÕEÃãS¿�fá[hâÕ‹á˜cë9:ô‡ˆ:�Y´ç…ækpè°TYSl ³§ ̘E£‹!sèù¬–öj¿§ 3ÏDÆj.¤}ÿÓ,ë™gùŠ>$« ®š;Ïàñv Åô7"ö†I ›­£‰ oE=õUâa:/—Û³ ,_µm_GIyç�ñcÃLµ§¾×EÛ°½Z$£]šÀ8œW8í¬�îV³!ö0­3ŠèÄ|¹ ~º ´@Ó ÿujÛñ“ û j'ˆï‡áNRýaYz:µsÈ䆙�„upv»¡7›‰G3Л<Í­»§‡"f6¶|2@¹ªÌ†ˆÓgê ›�Ò¨á5”Êg®©¯6ÉâûŠŠCè•©e 쀋ȥ«Q!ŽRÒê6¼ž‹Ê¥zoy .Å© ÍìóÝŸ †bÈî,_>Z²™É kÿ~ÿ:ü…ÝÁ9X¦<ï÷ét¶¶·ãÖyX|œ”¥^¦[êkŽü¸×Rï›îÆ=Aô}ý„¼g@Ý—mu¥%…9è E³%+uó~ºHÛè0q=©8ªjüì)Œ?ê0 ¸ƒTef\­çœ¢Ãðö˜°y\/úë{Y†÷‘†g4Ö¶ ôr¬è;PJò§Û¹s#ƒ= «µ¦ÁNÛÌF”~·€ŠD{5çC³›ÃºïÀä±™°!”J¢pô+ÌëÞ­ß|÷o0Ü)ð"Î{·�Þ­êïotž°Üz¢bˆ:�[¿—A Pù<8xç Ýà.ËÍ[‰æª@$jA ƒ™÷Cc=°§c Z5¹ŒP†æâ¨Ä²•ùˆx=DåEIyMHeòh¥òHin‡®u ÏßÒnœoúË¡�’j›]ôÐ0,B´·Y°K6€Øž3;œî6l þà ³¸è�Qºýh Ø�æ;¼xeêÊl KÓ¼W™o(çÍ‹t½‰/e)£ ‡’ÈXÔן6Áþìñ³¯ô�o¹7�î»Eüut†V�g-›J—§£¨dÅ!ñæh/\ý%@aSy¤gäˆá øÊõ§ª4êI šŸ¶ìøM8%ýø˜|ÚE¶(k‹˜Œ=™zMéà:…¥¢¸°ör©–"®UVî.ÞG èSÙ8‘iòýüI™Éš³h˜|õÑ­Û)þ?ºzýT×EU{ªE�áEŸ¹Ž&öÏ€+ƒpá50¬Ú([ý±Z¦]w �ÆØõÕ5õ ò ͈RšóbyB‘÷ˆ>"ç‹(§Õ)XÁIn  •<»|H·VËsÑ‘± iÒ™»;�–+‹Š k4ÍØÅÌ…øëþ¢¦¬œ¤CH² ðF^ÇÑF“Õ 
8'”è}A¾í/].=Ú1õo-¥u°N¼C§47à£Ëpá“ypû“èQx— 4v6ÓN³†,ŠA£…”Ê\dÆÙç�pPnƒ™eØ ë‹iÖèA-ÞÈ¡~ª¹Ø¦Oz…iY»ÁfµZmv†HÄ’ÿgòÕR¿ûK‹Â/•R ÚÐ?)úæ†'ÑCðs²µ¡w¨•öØ8Ma:ŠFè[°¾X�-¥Õj·KÜj•KE+”¹´YÕ—Ÿ�4$ŒÍë:©�P…æ¯Eñ|ׇÛwÜ? o¿~…æR‘";øb ä!Ç;ƒônÑ-�a/5´Ðo…" ­ßš&ò$ÒÛ-Ö�¢…Ÿj^6LÞzà´œ,ÐdÊiÙÇ_<ΣA7ü�ê¬;´.yã®t†çY=O›ÌVpÃWàTðÿžË鈟I©y z<]¥ž&nzd©W ¸)ø—‹³³Œ&yi ]ipx€é¡‘“/ ’^~WÏjQæÔÃBl%rÌmù° &^KÎkžßù.ø ÆKØE9¬ÑD_ÿw�a ±Íáÿáêuɹ‘Z³�>Ôà Z�þËz™7u�qÜŽƒâ¦Ä-º˜÷0÷8 Є�È’¸…�¦RCXl°Áx©À»eI–eI–®®dIwÑjÉûnÙ.‹m¼±v™Ø,%4Æ@&dp€–%í Í :Óc÷%�ö¡}¾gîù–ó}ÿßÿ–�¶Ø«$f\ö¯ãRà/žúH½{Œ„·ÐåŒ <-ØýÀÅW²�jÉ%à0ZDWŠ6s^vãoˆq‹¬ÈAýJ,Ïú—v’ÕÞs†nSr²©–ý‚YµšÓ²À2S>JÎ�£eµôà ›6==7#•Ù°>íbMg?Ú½Sh-±² Sü¼ìõXžk$lÝèõ; à2WÂ"‚üÏ „|IÌQ‰¶Þ ”„2{3;åµuè“T-újÁí(ªšg%ÕÎW’�Å$�ZÜ‹¢PÙmz8AÈÿ4·'à0&KYež Œ(b÷uô±�vr–Y/ÃapT±I´ ¼äà O:í&3ÌËð# m"”•9 ‚è€Gïø‚¢;’jEïþù$Š˜îZ¶&�‰;’IÈhf â�um¡×ÚÂ;ÆÑîñ4€Æ£—¼…¶;® ½?…RÇû}×3ãÊè{ic±øÅ·ñû˜ÙÖþæ·hþ7h ZH~ Ç�NTØúéð^4OŽî~IÝ'þ«PM4k6qUPSe¶éÍ “•Z¬%í!ÕqŸª¼ÃÂé+Kñ¼gT˜wXŒ€3Z :Éì�=õü‰ÁQ~î+NÉåò˜xv§FlôJþ~´J»È AÕ@Ë® ÃCÂß­†û±DBsI³¬6RÐD¡ù–—v›m óÆ³ ÑTvçâýÍL½ïŒCÇÛòã·¥àŠr0ïP{�h4QAgèå¶ð汈ïpE´Îj°°´A050]A¾«çwÄ Õˆ¢×Å:X;Ìç˜ìfÁO; .¬õ3þA7‘cÉb7˜8«ÞÈ(µ†r0Y} 6j5|)X‚·PܔВÒwúXïÓ;PìOD˜ 0g~ �ŅƉV†>�ÞTƒíÆ2©ç„túþÔTÇ+òb‹6E]ÓXŠ ýuÃÕÜ6ySè=ªzz�Äód<¨˜ÏÐ|# GEúÝÍA/ >h± < :Zš;‚%�¹[÷mË~ ÷äJLÀjáI¥D÷-Tæ¢VÐ3‰²7w®�5ÏØFïqÄ\Ñl‚,^ôìÇÚêàDÒìv]Oˆé”y¡ý"u¥àTtÚÑÒ¾+O>Ÿ Rô•Œ0Ô~õÑ¡ l®ïdF¸ à µ®¡nxw ù£÷­ÍiH­Ï&Wz‡7fÓ)iJ-“èšHÔµZ¥†3¬Õ�ž ïCg"�<ôR´G´š¡n)¾¦¥‹}�O¿Gsç Ú°by:ñš¢�¿�j] °^j=�@UTjªj�×Ý)x,|Gf6;$èmŸi§ÕaböÊÌ,Áaxhygá"­Â VV¡Ï5“ÄÝ0Qw&} 4¿Ó¨”·Ôò7¨ èëßÜD œF; ¹Oð {ku‰°‰HÏ£Zš7 Ff«¬Ü‘�çí…œ^•–¼%2f]ö�ÒI~2ÜÃèUt©žöëø}û Z§„)ëŠÈ¬šE‘· væAÇõ�biÌd�™rˆ: ¤Ð)uš–ÐÕÖÅœ »‹€ v‹Rqúíx�š6JVçŒÌŒ¡»c>¥üæUä9AMN„¶Fkà9:ÎÉ .‡Ãé€ÍÎ@ø»¬ÎFŠ©'£¿¿@¿qUׄ~ý�>¨¡^cçþy©a š³Ç‰�%ú6EŽÐ¼ÅedÊ œD=œ½2þ —R.µœû#úùEÕ}´´—šÐA.W§ã°V²ˆ°Yñä ø^敪,°/Å«q$�Ô•qŠ¡œú‰ËSÂìÓßÈ AK� U£¨ÃšÓû K ‹ê%[2 ÷.Þ½&M7ÜÍPmaÓ×> endobj 626 0 obj << /Filter /FlateDecode /Length 3131 /Subtype /Type1C >> stream H‰lUiTW®¢©jpim¡viUP�UL\pÜhÂ"¢´ÝtC7»M³4â‚&. ¢àDDD ØA\@—$£Nb"'Ήs¢“cœ9Ç$3“[xÉ™©63óËsê¼sÞ}ïÝw¿ï}ß-’pt H’œ©Ž��‹Œõ 7k róL…£)Ó?Ò -µÚ׊.„È’âLq–Lä E~’êñÀ«3¯Q\+ùïššßÆIrø~Šèâroæäw\ Š$;† |ý"´°Èj6e-‚wæ\!(xñB?i žÿz\ „è u!ÑZl1ä Q™…æ¢B³ÖbÐBH^ž°Æ~¶XXc(6˜K¥hÔÚPᥠ¿Õ˜Š­6d›¤,fƒ^°˜µzC¾Öœ+fI9¥�Öb,Ðæ k­E†,m¦AýïERø�9_ãÃÂëø¹"H‰.b A¸8o„'AøÄ|‚x— „š Þs ¢H"ΑH¤‰9÷„#!'ˆ0â q›dÈFò+‡w  ¾�eÉþà(wLvÜäø •KÙhz ý³<^~T>(u wªuzäœàÜ0A1Á8a“‰ ³ð¥Âþ=®�±¦Œ¼‰2øhLÍàúƸ†%Ý£a0c ‘–vJÛÈûžjˆù¸ <Å…àIž†Tð�TÙiøžÁå�0 ’!ù„Ãr ‡8ÌÀŒpŒÃ~¬fpCêžMEš#°2. íiY�ó8E ø‹}e$è Y&ÎFìùòÌ蹡›�0P>T7Ð2ÐR_; ¿Áeè/¶ƒ¿\:´Tœ$l”‰·)ƒ-ô‡Gà ˜HݸÚ32ªotÅYèí ˆ^å „˜¨áÍ÷ï²OåfƧhµÙ,Laþ’~}i;¯üñ~«íÎçªÏŒ·×wq i± É,NG‡Hœ‘ÏÃ’"j ®«û {¾Ç¢³l+-­âwVäî+g× çZù¼­kÿ¶fþÄÖF‹E•ª³¤'dŸyŸS´–‚/x‹î× A&>_¼}DwÌ–ãÑqw ½¡}!|å˜Þ ºÑà:Jª>–Àì;°ŒiH‘‰ b#Ó Ë¨ñ4:œƒù%Tçñ³MÝì5[N²ÆhÚ Ñu îà1¬„B¡ &‡ËÂlžBà§¥wÒ/ñ½™qí‰lzÆVsøïö x–葃Ùéàëdâ°¸��i›çÀ'(£÷^ ô+ЂÌä¤Ú¦‹1Àº¶@„tw Ì5ª!EùL4‰ÎŒòш­ûÜÌ@~ ££¶ê“¸”ìüUqªèΤ۹œòYd¡.)Di[{'�+ º¿ã6 ø Ã"þϺka6Nù#2…ÇLM6ÕÇ=MÝ—ûKR ,›¶Ig‹0†9»­¯Â Òj¶›6&6ßìj;ÕÑd¯ÊûŸ¥KA ;!Q¤“ @ƒŠ/VÁDöåãsß]ç/?<þô‰ ¢W@�䜼pê³ ˜zórË@·¯ŒZ½%pyVÉ™ž]Ü=ˆg6”^¸ØßÞ7xµ#3MSLá ’jXðg—‘‡Ç8™¸Ú riÌ SSh¦Ñô«š‚ôE)ˆ¡%ýø07As5”¢¡ 拜d¡“�«íÞ …&fÍ óšhÖo78q¯<žÙ¦NØËCò)ͯ(‹y~¸Sq=8W2í|�ÿóx©¡´–Ž©ËÈ[”A›½ŸUƒ‘V £ øžÈC ëI(‡�P®,‡_#Ô”?ŽîÅÏC§òÒŽñGtuªÐj$}8|I Æ�#ƒæì™õg8e@ÛHÎs¶³m_m7ó们Ö=å¬9ûàÙ]ü_oÛÀé'¤¡ Ñèõ»ÂeQœ²|u¸y^�Êw�8 üÈ+ —jΑ ©ÉÍý¦Æƒ' oÃâûüµñn¤��ï ?Ýà´éPJ�Meë«ï îµ&·p ¦¢åƒ8‘�PoÕ'òkõ9Q‘\þFÌ\×Ã;µÃºÝE«6¶}jåÐñ%u¯éÆ'£,¸ýð¦7óv43p L�ÓðVMÄ•¹ —|<R•ib¯Û˜ZV¾¼+OÛšÊb@ŠÔ¾§ñÊè:„³Àë›g¸ý´² B^9Œy¿uK W±}ëŽ26nÏãþÎÚó‡:ù®CM'ϱ6¿¿õ0ß°¹º¸L%yqÚ‹—OÁ™<%½½l�‰3¤YúN³5��·ö�ðð·Ó4,õ\®^fþ‰UÓÔ†ã°÷‚¶uôÒšqõ^P4~07\²%ÆáÐ Ãä›[!(áS>:™mA6ù˜¸R>¬µPì�p” ÅRʇ€¨‘Ä!1qÿ¶?†-f‹ìmsX¶÷vÉýqï¹ç¼ç<ïyžç}Ó39µ 9Úòió1ÇñE!8ëÑÚ¥uùÖCx{ˆ'p@9}Ë=ÿÀV�ÍÉO m…BÄÿÆAØD,³ã(æxLÏÝî v_¯(áÈ3,•kô¼q¶b’›¨ÌèÊf“sªr x¸Lë,Æ1ëVa2TƒÏ¬$–ãø“¶cZöà[–H¨9qJ«H(5†? 
—Ë ŠDÂn‹N1 �'LÎÜbœ“ Ceày\íÎiOë¹ó y 4óŒÑnÃüƒ©«…¥�!ø•¾o¼Wèâ†Ês¬y쉬‹)ÅübYv›œ¥É¬ä ã\¥°Ã€Á½Ü ü…˜½îæ0CO\›^ôóÊ8ò1¯ÐŒwÁøèü7^žÖ¥fO}QuºJ óâg’,V^¸ t[cÙ@ë¸ïëV’8"‹D›‰…ÐH4ô÷ tbá‘m’8ž„n_iy_ {K닊J-°w¶å·¤‰ ¨bŸÖ÷.„� Ü~'œc<ÐÏ•=Í6«�³vØÌß³Ošžå™õ CQc%§i¬®­dÕW¦Gy¢†§Êâp]�;4£[Ÿ-¸–FøîÝxú0œq‚T $1U{JË!Ï^]zð�P¯Ô)îb¬Pà‹V2Ž7äcñáÍ, Gºú9‡mÄ4ÌZ­†ï:yÔ­¦UÂX?2f×$‹R“Á²Ô…ÍS4 êBž‘ ®7…@±XyÝ…ÈÅ Þ3ƒ|þí SŽ©ª-¿í¢)øRû×&[øÍ�ÎQçx}ñMb ’�’ƒc1lJJcY6¯./Ë9 Žôa´×r݆•¦/ÃËS º¼4Ól ''ÆËCTZôF•b r S }~쩎^©%ñFs„¨ž“t­Y?ËÞñÔL±K×:gxf öÓMzýU={¦Ý1ÆCÂkPÑLñ [´p²Këë0äOL‹žñúFU>3’yšÖ7W7ÔpÕ ú: ›ÛâD7ÿÇ7iF¯nÏ -â†ÜõìòÊÀ²�GÉ{ãiyD@!Û,¾þ ŸS”Ç~ê0éýz%$Ÿ‚Ô-³¢Q ïàmÌiáݶC�¿UœÉS ¤íshÊJ"mè¹'Å�l —í£Va=w™ J ŠÉ ^­C µ:…Ñ÷ÏýRE>ƒa8µ¢m'�]uÝM ™¯$ä+‘ò‡É·$‘ôI žz ñ¯0er]�ÿ˜�dY(è7m©m´Öî�µ“t[0Xnl öÎþIõî¥!¶„�vGÁ"•µHå¾ßÃü¬ò?޲fU endstream endobj 627 0 obj << /Type /FontDescriptor /Ascent 0 /CapHeight 0 /Descent 0 /Flags 4 /FontBBox [ -1 -143 981 820 ] /FontName /BLONNK+ZapfDingbats /ItalicAngle 0 /StemV 0 /CharSet (/a73) /FontFile3 628 0 R endobj 628 0 obj << /Filter /FlateDecode /Length 235 /Subtype /Type1C >> stream H‰bdabddqòñ÷óóÖŽJ,HsÉÌKOJ,)‰Ûþø!ÃøC–é‡,óqM æƒ<,?äxĺ~+ÿÊüy‚UnãÿînÉÃþ=Wà{ÿ÷bÁ)ß' 1°02²„¥'š{†8+€ÌV€®�Y¬�¨P”šžY\’Z”š¢PR”˜’š›X”­�Ÿ¦à™ÌK,ÉÌÏKÌQ©,HMKLNUpÎ/È/ ëa˜ˆâtÆ<Æv& ¤¿¯âûÙ» ì§G9㺟îÌ?³Ä~züuÿëÁÎW;óû»ßÎ0eÄ^ú endstream endobj 629 0 obj << /Type /Encoding /Differences [ 1 /space /C /o /p /y /r /i /g /h /t /copyright /two /zero /four /L /e /a /n /E /x /s /comma /period /A /l /v /d /u /I /P /hyphen /m /c /b /U /S /N /w /Y /k /f /D /colon /five /one /q /emdash /B /endash /seven /six /eight /parenleft /parenright /T /F /O /z /three /quoteright /nine /fl /M /G /J /W /R /j /Q /V /K /H /slash /fi /quotedblleft /quotedblright /semicolon /underscore /question /exclam /percent /Z /eacute /idieresis /dollar /X ] endobj 630 0 obj << /Filter /FlateDecode /Length 756 >> stream H‰d•[k1Fßó+ô˜RŠW£Ñ:�ÈRè…$íûÆ–ƒ¡^›µÍ¿¯ÎŒ(}È YÍêœÕjg×÷7÷ãæfß§Ýò± Ãz3®¦rؽNËžËËf QÂj³<¶_V—ÛafuòãÛáX¶÷ãz..NfõŸ‡ãôN?ãa7>•ßÇO»í0~ì>„Ù·iU¦ÍøNŸâ�Ÿ5x|Ýï•m�¡ ‹EX•õÉìú˰ÿ:lK˜ýßÄ®ˆm»U9ì‡e™†ñ¥„‹..ÂE«ÿwr&>åyí¿ýZ/�t‹c+�&‚ÄX-èï2c+Ýüœ gÜ{ sÆsŸbWœ1>÷N00 |Š<3¶Ò]Ú”%c+]²¦+Æ+l¥…qñÀz¬¯}é×5ˆ] [©·Í€F§í# Ñiû[h­Ô 6òèøó3ÈcÃGP„<:¾Øm!�Ž/Ö|+µ©Ý±ù°)¨ˆÍGO€Šè>zà"¢û˜ÛÂP݇b,¢"º�Œ ˆŠè>䦂 i>,@…4°¤ùÀº BÜGæ¶‚ qÙ¦ B܇B+¨�¶ xØ‚ q™• Ä}ôW¨�æ|A…¸_P!î#] BÚþ°…¡BÚþ@² BÜÇÜTxé"M’ûPh¬Ô+€K¨Hî#Á’P‘ÜGb¥ É}$öGBEjÛÁÈS÷ÛBžÚv° �'ÇW^±yjÛÚyjø¶0È­Ô•â4Až¾�[¹»ê òÔ^ž¾B® 8…};(·UÈÕñ³]¹¶í@S…\Ûé`SÀWßëŠ uZE…ºåé+´ùÀ˜¢B݇àCQa¥²XSThÃçRȵá[Sȵ½ ÖruüD�!ÏŽ/4Ívn¶³�-•!Ï ¸ yvü[$gȳãßZSȳã ›sØKíQáê™ü÷ðåxæûñ~Î/_§©~ì#c§?§üf,ïß¡ýnÏ¡ÎßÉ ‰{ endstream endobj 631 0 obj << /Type /Encoding /Differences [ 1 /space /L /e /a /r /n /i /g /E /x /p /s /S /k /l /B /u /d /F /o /c /W /t /T /m /b /comma /h /w /v /y /emdash /f /z /hyphen /period /j /semicolon /A /q /V /two /zero /M /D /R /P /fi /G /I /K /N /C /U /J /fl /H /Z /idieresis /O /Q /parenleft ] endobj 632 0 obj << /Filter /FlateDecode /Length 624 >> stream H‰l”[kÛ@Fßý+ö1¥kgtIÀš¸Ð MÚwYZA, Y�æßWgF òàaôy/s¾Ý�õÍîv×wSXÿOÍCšÂ¡ëÛ1�O/c“Â>=u}ˆÚ®™–/‹Í± ÂzžüðzžÒq×Na³Y­Îž§ñ5\|©ûó©L¦O»©~îš�Ù‡°þ>¶iìú§pñýž…‡—axNÇÔO! ÛmhÓaµ¾ùZßêc ëwV±!q)ãÔ¦óP7i¬û§6Y܆�¦mH}ûÿ«2ó)ûƒûX™dÛYr Y~ƒ ä²²@ÈÉs"BAn!«¡$/}Ä BE^¹p…pI~éB…pE~åÛÚ.5yí‹^"ìÉ÷.X¥ yã‚"´ä²Â„D~ð]®g!fsnaÞ…J#äÑñ+¶��GÇ/sÈ£ãç%äqÁ¿G€<:~ɶòèøpòèø•- ytüÂȣ㗷�GÇ/­RÈ£ã ç!�Ž_bP„<:~eÛ‚oa¬t¬ˆîGÅ1V,!R‡…¸%S+Äý¨>#…8¾P©@. 
¾�€\ _q] ÇϹ0¹8~eäâø…m ¹8¾‚/�‹ã+§/�‹ãçVäâø¹±@.Ž_Ø�‹ã¬¡�[¸¿Î¨Cíå8~Ž… ¹.¯Ç|]®pŠê~äÜuÅ u?r®ƒb….×�;¦X¡‹8¦Xaa®ƒJ+ty œ­b…º…MÁ u?ŠºøaVèò .-m�“›�»Ã¿6@£ —½µœæe çnd Ïý¦ëÓ[O Ní…ßê¯TÞ>7 endstream endobj 633 0 obj << /Type /Encoding /Differences [ 1 /space /a73 ] endobj 634 0 obj << /Filter /FlateDecode /Length 213 >> stream H‰TP=¯Â0 Üû+<‚’2W]éï‰>ØÒÄ©"Q'rÒ�ÿ’ò%[ò�Ï>[ìº}G.�øe¯{L ÆègÖŽŽ Þ‚q:=«%ëIYÜßc©#ë¡iqÊdL|‡ÕU»w4Å�\ƒøaƒœXýÕçKú9„NH $´-´•Ø T8ª A|XØú¹ÜŒAidE#B#ëö‘�Ì7÷R öQ~Z)·²­²âÅq¹êí@ÏÌÙÜrúâ«xp„ïïÊÊտžmÉ endstream endobj 635 0 obj << /First 636 0 R /Count 17 /Last 637 0 R /Type /Outlines endobj 636 0 obj << /Parent 635 0 R /A 669 0 R /Next 667 0 R /Title (Table of Contents) endobj 637 0 obj << /Parent 635 0 R /A 638 0 R /Prev 639 0 R /Title (Chapter 15) endobj 638 0 obj << /D [ 453 0 R /Fit ] /S /GoTo endobj 639 0 obj << /Parent 635 0 R /A 640 0 R /Next 637 0 R /Prev 641 0 R /Title (Chapter 14) endobj 640 0 obj << /D [ 423 0 R /Fit ] /S /GoTo endobj 641 0 obj << /Parent 635 0 R /A 642 0 R /Next 639 0 R /Prev 643 0 R /Title (Chapter 13) endobj 642 0 obj << /D [ 399 0 R /Fit ] /S /GoTo endobj 643 0 obj << /Parent 635 0 R /A 644 0 R /Next 641 0 R /Prev 645 0 R /Title (Chapter 12) endobj 644 0 obj << /D [ 375 0 R /Fit ] /S /GoTo endobj 645 0 obj << /Parent 635 0 R /A 646 0 R /Next 643 0 R /Prev 647 0 R /Title (Chapter 11) endobj 646 0 obj << /D [ 345 0 R /Fit ] /S /GoTo endobj 647 0 obj << /Parent 635 0 R /A 648 0 R /Next 645 0 R /Prev 649 0 R /Title (Chapter 10) endobj 648 0 obj << /D [ 321 0 R /Fit ] /S /GoTo endobj 649 0 obj << /Parent 635 0 R /A 650 0 R /Next 647 0 R /Prev 651 0 R /Title (Chapter 9) endobj 650 0 obj << /D [ 291 0 R /Fit ] /S /GoTo endobj 651 0 obj << /Parent 635 0 R /A 652 0 R /Next 649 0 R /Prev 653 0 R /Title (Chapter 8) endobj 652 0 obj << /D [ 267 0 R /Fit ] /S /GoTo endobj 653 0 obj << /Parent 635 0 R /A 654 0 R /Next 651 0 R /Prev 655 0 R /Title (Chapter 7) endobj 654 0 obj << /D [ 237 0 R /Fit ] /S /GoTo endobj 655 0 obj << /Parent 635 0 R /A 656 0 R /Next 653 0 R /Prev 657 0 R /Title (Chapter 6) endobj 656 0 obj << /D [ 213 0 R /Fit ] /S /GoTo endobj 657 0 obj << /Parent 635 0 R /A 658 0 R /Next 655 0 R /Prev 659 0 R /Title (Chapter 5) endobj 658 0 obj << /D [ 183 0 R /Fit ] /S /GoTo endobj 659 0 obj << /Parent 635 0 R /A 660 0 R /Next 657 0 R /Prev 661 0 R /Title (Chapter 4) endobj 660 0 obj << /D [ 153 0 R /Fit ] /S /GoTo endobj 661 0 obj << /Parent 635 0 R /A 662 0 R /Next 659 0 R /Prev 663 0 R /Title (Chapter 3) endobj 662 0 obj << /D [ 123 0 R /Fit ] /S /GoTo endobj 663 0 obj << /Parent 635 0 R /A 664 0 R /Next 661 0 R /Prev 665 0 R /Title (Chapter 2) endobj 664 0 obj << /D [ 99 0 R /Fit ] /S /GoTo endobj 665 0 obj << /Parent 635 0 R /A 666 0 R /Next 663 0 R /Prev 667 0 R /Title (Chapter 1) endobj 666 0 obj << /D [ 75 0 R /Fit ] /S /GoTo endobj 667 0 obj << /Title (Introduction) /Next 665 0 R /Prev 636 0 R /Parent 635 0 R /A 668 0 R endobj 668 0 obj << /S /GoTo /D [ 57 0 R /Fit ] endobj 669 0 obj << /D [ 18 0 R /Fit ] /S /GoTo endobj 670 0 obj << /S /r endobj 671 0 obj << /S /D endobj 672 0 obj << /Nums [ 0 670 0 R 14 671 0 R ] endobj 673 0 obj << /Type /Pages /Kids [ 710 0 R 1 0 R 4 0 R 9 0 R 12 0 R 15 0 R 18 0 R 54 0 R 57 0 R 60 0 R ] /Count 10 /Parent 674 0 R endobj 674 0 obj << /Type /Pages /Kids [ 673 0 R 675 0 R 676 0 R 677 0 R 678 0 R 679 0 R 680 0 R 681 0 R 682 0 R 683 0 R ] /Count 100 /Parent 684 0 R endobj 675 0 obj << /Type /Pages /Kids [ 63 0 R 66 0 R 69 0 R 72 0 R 75 0 R 78 0 R 81 0 R 84 0 R 87 0 R 90 0 R ] /Count 10 /Parent 674 0 R endobj 676 0 obj << /Type 
/Pages /Kids [ 93 0 R 96 0 R 99 0 R 102 0 R 105 0 R 108 0 R 111 0 R 114 0 R 117 0 R 120 0 R ] /Count 10 /Parent 674 0 R endobj 677 0 obj << /Type /Pages /Kids [ 123 0 R 126 0 R 129 0 R 132 0 R 135 0 R 138 0 R 141 0 R 144 0 R 147 0 R 150 0 R ] /Count 10 /Parent 674 0 R endobj 678 0 obj << /Type /Pages /Kids [ 153 0 R 156 0 R 159 0 R 162 0 R 165 0 R 168 0 R 171 0 R 174 0 R 177 0 R 180 0 R ] /Count 10 /Parent 674 0 R endobj 679 0 obj << /Type /Pages /Kids [ 183 0 R 186 0 R 189 0 R 192 0 R 195 0 R 198 0 R 201 0 R 204 0 R 207 0 R 210 0 R ] /Count 10 /Parent 674 0 R endobj 680 0 obj << /Type /Pages /Kids [ 213 0 R 216 0 R 219 0 R 222 0 R 225 0 R 228 0 R 231 0 R 234 0 R 237 0 R 240 0 R ] /Count 10 /Parent 674 0 R endobj 681 0 obj << /Type /Pages /Kids [ 243 0 R 246 0 R 249 0 R 252 0 R 255 0 R 258 0 R 261 0 R 264 0 R 267 0 R 270 0 R ] /Count 10 /Parent 674 0 R endobj 682 0 obj << /Type /Pages /Kids [ 273 0 R 276 0 R 279 0 R 282 0 R 285 0 R 288 0 R 291 0 R 294 0 R 297 0 R 300 0 R ] /Count 10 /Parent 674 0 R endobj 683 0 obj << /Type /Pages /Kids [ 303 0 R 306 0 R 309 0 R 312 0 R 315 0 R 318 0 R 321 0 R 324 0 R 327 0 R 330 0 R ] /Count 10 /Parent 674 0 R endobj 684 0 obj << /Type /Pages /Kids [ 674 0 R 686 0 R ] /Count 192 endobj 685 0 obj << /Type /Pages /Kids [ 333 0 R 336 0 R 339 0 R 342 0 R 345 0 R 348 0 R 351 0 R 354 0 R 357 0 R 360 0 R ] /Count 10 /Parent 686 0 R endobj 686 0 obj << /Type /Pages /Kids [ 685 0 R 687 0 R 688 0 R 689 0 R 690 0 R 691 0 R 692 0 R 693 0 R 694 0 R 695 0 R ] /Count 92 /Parent 684 0 R endobj 687 0 obj << /Type /Pages /Kids [ 363 0 R 366 0 R 369 0 R 372 0 R 375 0 R 378 0 R 381 0 R 384 0 R 387 0 R 390 0 R ] /Count 10 /Parent 686 0 R endobj 688 0 obj << /Type /Pages /Kids [ 393 0 R 396 0 R 399 0 R 402 0 R 405 0 R 408 0 R 411 0 R 414 0 R 417 0 R 420 0 R ] /Count 10 /Parent 686 0 R endobj 689 0 obj << /Type /Pages /Kids [ 423 0 R 426 0 R 429 0 R 432 0 R 435 0 R 438 0 R 441 0 R 444 0 R 447 0 R 450 0 R ] /Count 10 /Parent 686 0 R endobj 690 0 obj << /Type /Pages /Kids [ 453 0 R 456 0 R 459 0 R 462 0 R 465 0 R 468 0 R 471 0 R 474 0 R 477 0 R 480 0 R ] /Count 10 /Parent 686 0 R endobj 691 0 obj << /Type /Pages /Kids [ 483 0 R 486 0 R 489 0 R 492 0 R 495 0 R 498 0 R 501 0 R 504 0 R 507 0 R 510 0 R ] /Count 10 /Parent 686 0 R endobj 692 0 obj << /Type /Pages /Kids [ 513 0 R 516 0 R 519 0 R 522 0 R 525 0 R 528 0 R 531 0 R 534 0 R 537 0 R 540 0 R ] /Count 10 /Parent 686 0 R endobj 693 0 obj << /Type /Pages /Kids [ 543 0 R 546 0 R 549 0 R 552 0 R 555 0 R 558 0 R 561 0 R 564 0 R 567 0 R 570 0 R ] /Count 10 /Parent 686 0 R endobj 694 0 obj << /Type /Pages /Kids [ 573 0 R 576 0 R 579 0 R 582 0 R 585 0 R 588 0 R 591 0 R 594 0 R 597 0 R 600 0 R ] /Count 10 /Parent 686 0 R endobj 695 0 obj << /Type /Pages /Kids [ 603 0 R 606 0 R ] /Count 2 /Parent 686 0 R endobj 696 0 obj << /Dt (D:20060112172316) /JTM (Distiller) endobj 697 0 obj /This endobj 698 0 obj << /CP (Distiller) /Fi 697 0 R endobj 699 0 obj << /R [ 1200 1200 ] endobj 700 0 obj << /JTF 0 /MB [ 0 0 612 792 ] /R 699 0 R /W [ 0 191 ] endobj 701 0 obj << /Fi [ 698 0 R ] /P [ 700 0 R ] endobj 702 0 obj << /Dm [ 612 792 612 792 ] endobj 703 0 obj << /MF false /Me 702 0 R endobj 704 0 obj << /D [ 701 0 R ] /MS 703 0 R /Type /JobTicketContents endobj 705 0 obj << /A [ 696 0 R ] /Cn [ 704 0 R ] /V 1.10001 endobj 706 0 obj << /CreationDate (D:20060112172316Z) /Author () /Creator (QuarkXPressª 4.11: AdobePS 8.5.1) /Producer (Acrobat Distiller 4.0 for Macintosh) /ModDate (D:20060119102416-05'00') /Title (501 Sentence Completion Questions) 
/Subject (1576855112) endobj 709 0 obj << /Outlines 635 0 R /Metadata 722 0 R /JT 705 0 R /Pages 684 0 R /Type /Catalog /PageLabels 672 0 R endobj 710 0 obj << /Type /Page /Parent 673 0 R /Resources 711 0 R /Contents 712 0 R /MediaBox [ 0 0 612 792 ] /CropBox [ 0 0 612 792 ] /Rotate 0 endobj 711 0 obj << /ProcSet [ /PDF /Text ] /Font << /F1 714 0 R /F2 716 0 R >> /ExtGState << /GS1 719 0 R >> endobj 712 0 obj << /Length 249 /Filter /FlateDecode >> stream H‰<�ÍNÃ0„ŸßaŽ‰Ô»þ‰}A%$U-q@œŠµjR E}}Ö E–µk{üí £§åjÃè'rhcÀðWÔÑ1v §7ÁÁ‹à®µßå_ûŠ‘ Ú$�±\IªàòÐ$Å3'xð­4^14³äÊèné>Óò‰ÁÈ Ù£K‹„Ø· .(_ÔF tZ¯ãRDÞê)Ÿi¡ø›¼W„Ì5ZZX#NgCEø¡?›2žÊ¸-x8Ÿ‡rÚ G¬ÊT›©â ³WºŸ•µGÄlð‚·wƒ�å5–‡�æ+¹k, (Õô> endobj 714 0 obj << /Type /Font /Subtype /Type1 /FirstChar 32 /LastChar 181 /Widths [ 300 320 460 600 600 700 720 300 380 380 600 600 300 240 300 600 600 600 600 600 600 600 600 600 600 600 300 300 600 600 600 540 800 640 660 660 660 580 540 660 660 300 400 640 500 880 660 660 620 660 660 600 540 660 600 900 640 600 660 380 600 380 600 500 380 540 540 540 540 540 300 560 540 260 260 560 260 820 540 540 540 540 340 500 380 540 480 740 540 480 420 380 300 380 600 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 300 600 600 300 300 300 300 300 740 300 300 300 300 300 300 300 600 300 300 300 540 ] /Encoding /WinAnsiEncoding /BaseFont /BLOGEG+FranklinGothic-Demi /FontDescriptor 713 0 R endobj 715 0 obj << /Type /FontDescriptor /Ascent 716 /CapHeight 716 /Descent -236 /Flags 32 /FontBBox [ -180 -213 1020 953 ] /FontName /BLOGEH+ArialMT-ExtraBold /ItalicAngle 0 /StemV 0 /XHeight 536 /CharSet (/five/K/n/I/r/space/h/six/s/seven/i/W/t/eight/zero/l/u/one/Y/nine/N/O/tw\ o/m/Q/C/three/o/c/R/d/p/a/S/e/E/four) /FontFile3 718 0 R endobj 716 0 obj << /Type /Font /Subtype /Type1 /FirstChar 32 /LastChar 181 /Widths [ 333 333 490 604 615 948 802 260 365 365 469 625 302 333 302 281 615 615 615 615 615 614 615 615 615 615 333 333 625 625 625 615 854 750 750 750 750 698 635 802 781 333 615 781 635 885 781 802 698 802 750 698 667 781 719 969 719 719 667 365 281 365 625 531 333 615 635 615 635 615 365 635 635 302 302 615 302 948 635 635 635 635 417 563 385 635 583 865 615 583 531 385 281 385 625 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 333 615 615 333 333 333 333 333 771 333 333 333 333 333 333 333 625 333 333 333 635 ] /Encoding /WinAnsiEncoding /BaseFont /BLOGEH+ArialMT-ExtraBold /FontDescriptor 715 0 R endobj 717 0 obj << /Filter /FlateDecode /Length 598 /Subtype /Type1C >> stream H‰l�ßKSaÇÏëܦvZ³š#Oí¼B¥ gé¬aiêt06Ä ÏŽîඳΎÆè"]”Š5²QA? 
‹~‰¿n„²‹þÑ…^tÓE<¯¾;˺óæ{ñyx¾Ï÷û ¦¸ˆAq-í�žsžšóªŽÊq�¢Ed±®UŠÉ…±“”3„Cä9d Žb³4H§6Ç7�Ž,Úš˜ØVÖ ß÷ÀzùW²¶—1"ôlîÛêš[I¤Ty(¢á#âQ\ïj8U««ëä_uâæ°2 ážTR“bIÜ5¡¨‚&…�cÜ �boa7‰½RRRGuÚÖëÆÿƒâí¤XNb«Ò�¬»¨Rkª–b‚:Œ•AÝS‡qA“•¸Ž©„4(ˆvÿ;¤ã = íwà¼ÓŸÄ”#¦‚aŠ2ˆ· ¶¤¡ŒÄÀŒ^Áiè‡FÌ¥mX…vh‚¦³p�ú©ÏM;©‹ºr´|ü$ùa[z døLpêb°²ÿ¦ÿzÈáÓÄnΧ½û”æÇfÇççs7 á·Ã2V²|Á-¨7,‘:ôü2æí4L­d¬ÔK–i‰Ù¦6â¼\¢‡ÙžÑE°®C"‡Ái€yoƒEÓ4Tó­¦DþØGŒ¯~xü†û<Û×ѸÔÕÿrùO��)÷P'ع\nz~‘ÿ8óäËJåJj¡oÆñ6äËvq^ÿÕ�È[Ò™�– íʘàé�|ß]óhvãD–ú ”@æ^>�-udÝ)û[ ¦2¨Þ¦G, Õ÷ÙÝ“¬…üÜOjlöêa endstream endobj 718 0 obj << /Filter /FlateDecode /Length 3392 /Subtype /Type1C >> stream H‰tU PSé¾—�ä¢4(l(äBîE©ŠÝÅjå- È�Œ"C$��@„ „·Š0JU¨²PñÉCD|t\ñ±¸ÒÕUf׺vÚ�Z·ËÚžËþ±í�ÎN;ítæÎ�¹ßÿŸs¾ó�Ç% k+‚$IçÐØ„�á‘+Bôj¹&Næ ^bÐËCuš¬ùÃ@Ξàh’s±â\yœÔ…Ùò.ÙZsŒ­#ŠDêš~ˆãK;È64¼Û 9?;Nµ¸ÕETcOðqâ-¹˜t&ÝÉ_„éòMzu¶ÊÀz…ýŒýÈ×÷#6N§ÕLùJ6L§Ï×éåµNËæk«Ø�†Mš¿\È&) •úbeÿ›"«.då,þÈRæÉõ¹¬n'kP)ÿ¿WV¦Âùz]V‘Â0o¬Q+”ÚBeÖJ›°…ØåJV®Íbóä&V§Õ˜Ø J¶Ÿ³j-+W(tú,¹V¡d�jƒj>kPêó ÙÂ|¥B½Sýþ†cß{eC²õJežRkXõŽ9'cßQgßqÿû7ô¿%Àú‘±Ø†ð O‚XN+ âç$ácE¬#?‚%ˆ VDIÄYÉ<Âs^qkBH‰Ó$EV‘ÃV‹¬Ê­†x±¼+ÖÖë­S¬­GøžünÁÁ-Á?„ÞÂKS�l–ÚÄØì°yºàÃe ¦:-ì³åÙn·8‰ðS v\ؑñÄNŒ ;~»QºéÊÍ4ýÇ/†ž3Ýc­��/�Jo£à•Áа,©:¡Ìa,è€x|ÿ­ƒ}Òñæ3ç/ÓŸvÆ hV©/:ÒafªŽÕ´ï=NÕ^Ü;>!5´Ï·“pxp{.[lÜj6©rå;s’Mò>:1uáæ0e©ƒ€¹l¡¨ܹ4°'97ðçqwÅàŽ†…ÍÇ.^¼Ô}cÐéû¯†¦_H eOÆ_�žÜ‘ÚAûÈLòtF¾Ó˜QþeNŸ:ß;Nß¼ NÙµ§´¶œ1)KŒ¿Ü€‘›,ÈŒl(,ÅÕeÏ Á¹@�Ã7b‡ç_§Ý^Ý!}Øß}uR¤÷4bÐR¿Dä�'Ÿ|þäá³Ý×è+çò¶m‘g'í­Û×XÃT E|‡©æ¾_Õ·Óã3£#éI2uïg ò-â#‡ã¯CÁ‹Ÿ¿ý +¶àÊa5fÁ›Ç58û\}¸$´Tˆ^ZâùÈ f+DpN æ#³PÄVG€úÜIÎW š¹{bwËA¤ÅX#ÿBϹ�ûé{ýé~þŠ”„eß�j˜øèƒãßDÀ Vï x½ü5rÜjÜ]”Έª°ÛƒÀ p3DA2¬å�–[->0ÑÔÕrŽjí:ÝGžÊOifÀ·�B½ïmô�6º!)2#ókä€Izÿ <�öžE¢„ cÆæÍ6ñx_j°U¶I>öäáÀµ[ó¹sàE´ã! Ooˆá?�?¿;A��å8ò‚ß ³ÒÊ�9›óÓ ¦íÙkMÁåÈ~¯i¾_)(Ä4qÉ.ãŠ-ùøÃjä‰+÷Wn§ØáÕ3ù'A½ÒÉÞ®kw%ãyciÝÒSY›Z7ÐÈcU0³=©÷5Òì]•µEÅT®¦dKй~ë ëÝ_¿©Ôáš…âëÝÝ×G{Ô©©9ª´”œžëR‡Ù ô‰øÞÀÈ­±þ­Ñ1YÛdÉò‘»Òw¤\0©&XHöÂZ¬]4wì¨xOb]N¥†Ê,Ó(iyÁ©¸¾F>œœI{¢ÿ4T‚Ù OUò^â…<‘Ë7ž°h|¸mx�qoÊ }6ÝwíæÄ ,0<+-‰�­.<ȈçÍ%s‡Å°t»çgh-�r‘¹#'Ä€ B¾{ k.1ÈÿF­¦¾¶ªÈYfØšO˶u0Æžê�$°d|æû©áìõ�R‘a¸c1%8Wòsˆá͹îÃg¡1ø$è ¸pvàj™x+âÝ�Gy]û»{Ðâ8—‰bÞf E·ß£O0úÄb S€uYè1J¨ŸK‡:8ÂGUT…Æø0†bþÃ(ú= ¹P!œ·„ò!;^o‰ .Š�Bßf D3? 
pûÿ‹pæÄûÑO:ƒnGS!÷ŸkÀŽÆ=â ®†2„ÓAFdã— ÅÄFåù �bæÉä ¦Æ¿€–DUg+¥©:™&žNÉlë×0¹»F÷}AÍÇrà ȃ „xøv^ð3}âÄβCÊ3Ôø§§ŸÎïD3r~¢9ãI(G¥X)èŸm꜒^Ém©i3÷©í5S> wT¨ïm«Ú_¸†ª3T×›hDÆ} Ké�¯À ¬'ý�ûúÊLm:“8dlITZ,Ù’¦ ÷ÍìÿÃ.)ÚnJijÕ8{ýe,£ŸM¶œa  z&¹«¿•Ñ'•÷¦¶/?N‰ ñ}ßÀ¬#H¸�Xªø'¿¹L> ¬²lFBn3% Ð"KÒ‡\Ò¿˜¬ò˜&Ï0 Å~_³,ÍÖòÕÙoù¾£&‹†ËBÔ.N3q:ăc 2© ‚P®-EÚåhD(µ‡|€-”d ÇPQ.7ð2�Q\¢â4ÑdGØžoyqÙ[å�ýñþó¾Ïó{Ÿã÷ ´‡€­o“"Ó²¦ ó”ÿF ã$´ò[HI Ä¡@à(…€Ö¥š EÆ?œü:ð÷á6ȯ›¦$y5e–bG�äDÁd5÷‹Ì}Æ]ãfž’÷;µ=ÌdN¸}H#ùîchU AEÖ–:‹Í¾†ã<Ö.ºßS^be-ú3¦LÙ±RcuHÉ&mNùñ|¡¤WU¤IN’Ü;+L¸®�1Fø‚:RUdÔ0ÙÚtM ­Ìkëít·\w°ƒv·Ù^ë±­@ ƒ8=x ð�á7 $éÛo ô¦ýê˜86)¾0b«ìsWäåæÐµŸÔ‹4¬}z ü»YÚ"8 ;‘K™zýÕË�ɹk—Tq ¬Hn1"�¼?FÞæ_ÂyO…ä$/—B.Jª'†•‰®ý´dÑaëÑVº€V�ÿì˜wz˜E°�zÕ7~gºïPHHZôþˆ´Éç ¶Ó„/7…=°Ó�j ŽlþíVD•(™Ã Y;ä²µƒ¡�2ê�1Ãm>}<¼x÷0§ Y›¯;JG¤^~PÅò$±D¼ ÌäÌàÕ©»ÓvœcDr)6wÅ-PÀ.ߎQ 9^×vÞÍx¹Î¦nÚë6”8X[I]a‚ ua±ŸÉvSWU 3_öõh ýU|at< MHáëfKa²À9‹ënÖ‡f�ò R,q$(ø'¬ÆQXý^ÊǦe_p|Ê ÁÊ•å߯ÿ„àŒ#8ÏO��zÝ12ãvé‹íŒMw¶,E†FñóÒmꮺÀ<Òï ¦Ã“ cز=»èÝ êq¬dô> \â@AJæÁNvœm®s2#çº:†èÑæ¬0õá§waxÿír$C�ËsÔ{UBòÉæ³=5 3^ß×;J··ê5Ö¡©-Æc° «L“cÕ�§½Ìý¢ƒaôÎÃ9‰ïø ‚L± ,QŒiÖ6 )ê7Â߀ që’ƒ °oWÛ�üÐ҅᩹¹ËÑAɇÃÓ'^šp¢Pœ¶ab €xÙÌ#¸°æ—Í|û-¾¨'‡†´3´ä H0vbhÉž˜œƒJiÔÍÞ˜à ãÑö¦Lýú¼ü6+B7°ëB`0æ ìv/äÿÏí9ØNNÖxêú™»Ö‹½Ã´ËZ]ÚÉŽÁjÁ…TµEEÇfìÍÀÁÆExþ;Þ�6c¯‚ý¼°�2Ö–›�VM³Ú©f|S÷Œ†ÏÞÌþå©p�l™§Ð¥j8ÁéÌ‘\ƹOv5�wËÜm�ý#­¥JŽ� שöJÇiÇš’öLOfƒÐ‘­¬�¦CcJÓSYU®.å¨,³éD‡šéÏößäOž¼hä BKEÝi£,7³(%¶ y¢˜¡\�Ð�“€÷çg˜iÏø©�{�è.J#£jN™s˜�Ž‘ì×ôw®ZÛU¶É¬‘¾RP¡=Ym õ†z[[a7Ù®È é­‡(L”S¸Ä?ð\<³Ì-"…@É É^7£LÅLº1=OI窹Ör¶¼¥Ê;&ƒLýy2ΒΩ˜ø¶)Õ ºÀÚÔÅv668Ú/áµÚÎsˆt<òÿ'�úO€§÷Ÿ endstream endobj 719 0 obj << /Type /ExtGState /SA false /SM 0.02 /OP false /op false /OPM 1 /BG2 /Default /UCR2 /Default /TR2 /Default endobj 722 0 obj << /Type /Metadata /Subtype /XML /Length 2024 >> stream Acrobat Distiller 4.0 for Macintosh 2006-01-12T17:23:16Z 2006-01-19T10:24:16-05:00 501 Sentence Completion Questions QuarkXPressª 4.11: AdobePS 8.5.1 1576855112 2006-01-12T17:23:16Z QuarkXPressª 4.11: AdobePS 8.5.1 2006-01-19T10:24:16-05:00 2006-01-19T10:24:16-05:00 application/pdf 501 Sentence Completion Questions 1576855112 uuid:63313d5b-8443-11da-9609-003065ccc8c2 uuid:38357b2e-8444-11da-9609-003065ccc8c2 application/pdf 501 Sentence Completion Questions 1576855112 endstream endobj xref 0 723 0000000707 65535 f 0000000016 00000 n 0000000168 00000 n 0000000241 00000 n 0000000349 00000 n 0000000501 00000 n 0000000723 00000 n 0000005332 00000 n 0000005469 00000 n 0000005572 00000 n 0000005726 00000 n 0000005884 00000 n 0000006786 00000 n 0000006941 00000 n 0000007123 00000 n 0000008184 00000 n 0000008339 00000 n 0000008497 00000 n 0000009250 00000 n 0000009421 00000 n 0000009556 00000 n 0000009718 00000 n 0000009883 00000 n 0000010050 00000 n 0000010215 00000 n 0000010378 00000 n 0000010543 00000 n 0000010706 00000 n 0000010871 00000 n 0000011036 00000 n 0000011199 00000 n 0000011366 00000 n 0000011529 00000 n 0000011696 00000 n 0000011855 00000 n 0000012020 00000 n 0000012185 00000 n 0000012355 00000 n 0000012882 00000 n 0000012953 00000 n 0000013024 00000 n 0000013095 00000 n 0000013167 00000 n 0000013239 00000 n 0000013311 00000 n 0000013383 00000 n 0000013455 00000 n 0000013527 00000 n 0000013599 00000 n 0000013671 00000 n 0000013743 00000 n 0000013815 00000 n 0000013887 00000 n 0000013959 00000 n 0000014031 00000 n 0000014186 00000 n 0000014303 00000 n 0000014751 00000 n 0000014906 00000 n 0000015088 00000 n 0000016437 00000 n 0000016592 00000 n 0000016746 00000 n 0000018569 00000 n 0000018724 00000 n 0000018890 00000 n 0000020371 00000 n 0000020526 00000 n 0000020679 00000 n 0000022812 00000 n 0000022967 00000 n 0000023108 00000 n 0000025227 00000 n 0000025382 00000 n 0000025456 00000 n 
0000025565 00000 n 0000025720 00000 n 0000025861 00000 n 0000026586 00000 n 0000026741 00000 n 0000026882 00000 n 0000028058 00000 n 0000028213 00000 n 0000028354 00000 n 0000029540 00000 n 0000029695 00000 n 0000029836 00000 n 0000030951 00000 n 0000031106 00000 n 0000031247 00000 n 0000032518 00000 n 0000032673 00000 n 0000032814 00000 n 0000033933 00000 n 0000034088 00000 n 0000034241 00000 n 0000035842 00000 n 0000035997 00000 n 0000036150 00000 n 0000037291 00000 n 0000037448 00000 n 0000037590 00000 n 0000038302 00000 n 0000038460 00000 n 0000038602 00000 n 0000039702 00000 n 0000039860 00000 n 0000040002 00000 n 0000041153 00000 n 0000041311 00000 n 0000041453 00000 n 0000042631 00000 n 0000042789 00000 n 0000042931 00000 n 0000044062 00000 n 0000044220 00000 n 0000044362 00000 n 0000045552 00000 n 0000045710 00000 n 0000045864 00000 n 0000047535 00000 n 0000047693 00000 n 0000047847 00000 n 0000049450 00000 n 0000049608 00000 n 0000049750 00000 n 0000050450 00000 n 0000050608 00000 n 0000050750 00000 n 0000051966 00000 n 0000052124 00000 n 0000052266 00000 n 0000053339 00000 n 0000053497 00000 n 0000053639 00000 n 0000054701 00000 n 0000054859 00000 n 0000055001 00000 n 0000056045 00000 n 0000056203 00000 n 0000056345 00000 n 0000057513 00000 n 0000057671 00000 n 0000057813 00000 n 0000058459 00000 n 0000058617 00000 n 0000058771 00000 n 0000060391 00000 n 0000060549 00000 n 0000060703 00000 n 0000062156 00000 n 0000062314 00000 n 0000062389 00000 n 0000062499 00000 n 0000062657 00000 n 0000062799 00000 n 0000063542 00000 n 0000063700 00000 n 0000063842 00000 n 0000064876 00000 n 0000065034 00000 n 0000065176 00000 n 0000066341 00000 n 0000066499 00000 n 0000066641 00000 n 0000067814 00000 n 0000067972 00000 n 0000068114 00000 n 0000069240 00000 n 0000069398 00000 n 0000069540 00000 n 0000070624 00000 n 0000070782 00000 n 0000070924 00000 n 0000071537 00000 n 0000071695 00000 n 0000071849 00000 n 0000073478 00000 n 0000073636 00000 n 0000073790 00000 n 0000075159 00000 n 0000075317 00000 n 0000075392 00000 n 0000075502 00000 n 0000075660 00000 n 0000075802 00000 n 0000076530 00000 n 0000076688 00000 n 0000076830 00000 n 0000077960 00000 n 0000078118 00000 n 0000078260 00000 n 0000079328 00000 n 0000079486 00000 n 0000079628 00000 n 0000080738 00000 n 0000080896 00000 n 0000081038 00000 n 0000082116 00000 n 0000082274 00000 n 0000082416 00000 n 0000083551 00000 n 0000083709 00000 n 0000083851 00000 n 0000084635 00000 n 0000084793 00000 n 0000084947 00000 n 0000086672 00000 n 0000086830 00000 n 0000086984 00000 n 0000088507 00000 n 0000088665 00000 n 0000088740 00000 n 0000088850 00000 n 0000089008 00000 n 0000089150 00000 n 0000089865 00000 n 0000090023 00000 n 0000090165 00000 n 0000091331 00000 n 0000091489 00000 n 0000091631 00000 n 0000092765 00000 n 0000092923 00000 n 0000093065 00000 n 0000094114 00000 n 0000094272 00000 n 0000094414 00000 n 0000095602 00000 n 0000095760 00000 n 0000095902 00000 n 0000096964 00000 n 0000097122 00000 n 0000097276 00000 n 0000098886 00000 n 0000099044 00000 n 0000099198 00000 n 0000100676 00000 n 0000100834 00000 n 0000100976 00000 n 0000101677 00000 n 0000101835 00000 n 0000101977 00000 n 0000103148 00000 n 0000103306 00000 n 0000103448 00000 n 0000104501 00000 n 0000104659 00000 n 0000104801 00000 n 0000105898 00000 n 0000106056 00000 n 0000106198 00000 n 0000107274 00000 n 0000107432 00000 n 0000107574 00000 n 0000108671 00000 n 0000108829 00000 n 0000108971 00000 n 0000109723 00000 n 0000109881 00000 n 0000110035 00000 n 0000111706 00000 n 
LINEAR ALGEBRA

Paul Dawkins

Table of Contents

Preface
Outline
Systems of Equations and Matrices
  Introduction
  Systems of Equations
  Solving Systems of Equations
  Matrices
  Matrix Arithmetic & Operations
  Properties of Matrix Arithmetic and the Transpose
  Inverse Matrices and Elementary Matrices
  Finding Inverse Matrices
  Special Matrices
  LU-Decomposition
  Systems Revisited
Determinants
  Introduction
  The Determinant Function
  Properties of Determinants
  The Method of Cofactors
  Using Row Reduction To Compute Determinants
  Cramer's Rule
Euclidean n-Space
  Introduction
  Vectors
  Dot Product & Cross Product
  Euclidean n-Space
  Linear Transformations
  Examples of Linear Transformations
Vector Spaces
  Introduction
  Vector Spaces
  Subspaces
  Span
  Linear Independence
  Basis and Dimension
  Change of Basis
  Fundamental Subspaces
  Inner Product Spaces
  Orthonormal Basis
  Least Squares
  QR-Decomposition
  Orthogonal Matrices
Eigenvalues and Eigenvectors
  Introduction
  Review of Determinants
  Eigenvalues and Eigenvectors
  Diagonalization

Preface

Here are my online notes for my Linear Algebra course that I teach here at Lamar University. Despite the fact that these are my "class notes" they should be accessible to anyone wanting to learn Linear Algebra or needing a refresher.

These notes do assume that the reader has a good working knowledge of basic Algebra. This set of notes is fairly self-contained, but enough Algebra-type problems (arithmetic and occasionally solving equations) can show up that not having a good background in Algebra can cause the occasional problem.

Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

1. Because I wanted to make this a fairly complete set of notes for anyone wanting to learn Linear Algebra I have included some material that I do not usually have time to cover in class, and because this changes from semester to semester it is not noted here. You will need to find one of your fellow classmates to see if there is something in these notes that wasn't covered in class.

2. In general I try to work problems in class that are different from my notes. However, with a Linear Algebra course, while I can make up the problems off the top of my head there is no guarantee that they will work out nicely or the way I want them to. So, because of that, my class work will tend to follow these notes fairly closely as far as worked problems go. With that being said I will, on occasion, work problems off the top of my head when I can to provide more examples than just those in my notes. Also, I often don't have time in class to work all of the problems in the notes and so you will find that some sections contain problems that weren't worked in class due to time restrictions.

3. Sometimes questions in class will lead down paths that are not covered here. I try to anticipate as many of the questions as possible when writing these notes up, but the reality is that I can't anticipate all the questions. Sometimes a very good question gets asked in class that leads to insights that I've not included here. You should always talk to someone who was in class on the day you missed and compare these notes to their notes and see what the differences are.

4. This is somewhat related to the previous three items, but is important enough to merit its own item. THESE NOTES ARE NOT A SUBSTITUTE FOR ATTENDING CLASS!! Using these notes as a substitute for class is liable to get you in trouble. As already noted, not everything in these notes is covered in class and often material or insights not in these notes is covered in class.

Outline

Here is a listing and brief description of the material in this set of notes.
Systems of Equations and Matrices
  Systems of Equations – In this section we'll introduce most of the basic topics that we'll need in order to solve systems of equations, including augmented matrices and row operations.
  Solving Systems of Equations – Here we will look at the Gaussian Elimination and Gauss-Jordan methods of solving systems of equations.
  Matrices – We will introduce many of the basic ideas and properties involved in the study of matrices.
  Matrix Arithmetic & Operations – In this section we'll take a look at matrix addition, subtraction and multiplication. We'll also take a quick look at the transpose and trace of a matrix.
  Properties of Matrix Arithmetic – We will take a more in-depth look at many of the properties of matrix arithmetic and the transpose.
  Inverse Matrices and Elementary Matrices – Here we'll define the inverse and take a look at some of its properties. We'll also introduce the idea of Elementary Matrices.
  Finding Inverse Matrices – In this section we'll develop a method for finding inverse matrices.
  Special Matrices – We will introduce Diagonal, Triangular and Symmetric matrices in this section.
  LU-Decompositions – In this section we'll introduce the LU-Decomposition, a way of "factoring" certain kinds of matrices.
  Systems Revisited – Here we will revisit solving systems of equations. We will take a look at how inverse matrices and LU-Decompositions can help with the solution process. We'll also take a look at a couple of other ideas in the solution of systems of equations.
Determinants
  The Determinant Function – We will give the formal definition of the determinant in this section. We'll also give formulas for computing determinants of 2x2 and 3x3 matrices.
  Properties of Determinants – Here we will take a look at quite a few properties of the determinant function. Included are formulas for determinants of triangular matrices.
  The Method of Cofactors – In this section we'll take a look at the first of two methods for computing determinants of general matrices.
  Using Row Reduction to Find Determinants – Here we will take a look at the second method for computing determinants in general.
  Cramer's Rule – We will take a look at yet another method for solving systems. This method will involve the use of determinants.
Euclidean n-space
  Vectors – In this section we'll introduce vectors in 2-space and 3-space as well as some of the important ideas about them.
  Dot Product & Cross Product – Here we'll look at the dot product and the cross product, two important products for vectors. We'll also take a look at an application of the dot product.
  Euclidean n-Space – We'll introduce the idea of Euclidean n-space in this section and extend many of the ideas of the previous two sections.
  Linear Transformations – In this section we'll introduce the topic of linear transformations and look at many of their properties.
  Examples of Linear Transformations – We'll take a look at quite a few examples of linear transformations in this section.
Vector Spaces
  Vector Spaces – In this section we'll formally define vectors and vector spaces.
  Subspaces – Here we will be looking at vector spaces that live inside of other vector spaces.
  Span – The concept of the span of a set of vectors will be investigated in this section.
  Linear Independence – Here we will take a look at what it means for a set of vectors to be linearly independent or linearly dependent.
  Basis and Dimension – We'll be looking at the idea of a set of basis vectors and the dimension of a vector space.
  Change of Basis – In this section we will see how to change the set of basis vectors for a vector space.
  Fundamental Subspaces – Here we will take a look at some of the fundamental subspaces of a matrix, including the row space, column space and null space.
  Inner Product Spaces – We will be looking at a special kind of vector space in this section, as well as define the inner product.
  Orthonormal Basis – In this section we will develop and use the Gram-Schmidt process for constructing an orthogonal/orthonormal basis for an inner product space.
  Least Squares – In this section we'll take a look at an application of some of the ideas that we will be discussing in this chapter.
  QR-Decomposition – Here we will take a look at the QR-Decomposition for a matrix and how it can be used in the least squares process.
  Orthogonal Matrices – We will take a look at a special kind of matrix, the orthogonal matrix, in this section.
Eigenvalues and Eigenvectors
  Review of Determinants – In this section we'll do a quick review of determinants.
  Eigenvalues and Eigenvectors – Here we will take a look at the main section in this chapter. We'll be looking at the concept of Eigenvalues and Eigenvectors.
  Diagonalization – We'll be looking at diagonalizable matrices in this section.

Systems of Equations and Matrices

Introduction

We will start this chapter off by looking at the application of matrices that almost every book on Linear Algebra starts off with, solving systems of linear equations. Looking at systems of equations will allow us to start getting used to the notation and some of the basic manipulations of matrices that we'll be using often throughout these notes.

Once we've looked at solving systems of linear equations we'll move into the basic arithmetic of matrices and basic matrix properties. We'll also take a look at a couple of other ideas about matrices that have some nice applications to the solution of systems of equations.

One word of warning about this chapter, and in fact about this complete set of notes for that matter: we'll start out in the first section or two doing a lot of the details in the problems, but towards the end of this chapter and into the remaining chapters we will leave many of the details to you to check. We start off by doing lots of details to make sure you are comfortable working with matrices and the various operations involving them. However, we will eventually assume that you've become comfortable with the details and can check them on your own. At that point we will quit showing many of the details.

Here is a listing of the topics in this chapter.

  Systems of Equations – In this section we'll introduce most of the basic topics that we'll need in order to solve systems of equations, including augmented matrices and row operations.
  Solving Systems of Equations – Here we will look at the Gaussian Elimination and Gauss-Jordan methods of solving systems of equations.
  Matrices – We will introduce many of the basic ideas and properties involved in the study of matrices.
  Matrix Arithmetic & Operations – In this section we'll take a look at matrix addition, subtraction and multiplication. We'll also take a quick look at the transpose and trace of a matrix.
  Properties of Matrix Arithmetic – We will take a more in-depth look at many of the properties of matrix arithmetic and the transpose.
  Inverse Matrices and Elementary Matrices – Here we'll define the inverse and take a look at some of its properties. We'll also introduce the idea of Elementary Matrices.
  Finding Inverse Matrices – In this section we'll develop a method for finding inverse matrices.
  Special Matrices – We will introduce Diagonal, Triangular and Symmetric matrices in this section.
  LU-Decompositions – In this section we'll introduce the LU-Decomposition, a way of "factoring" certain kinds of matrices.
  Systems Revisited – Here we will revisit solving systems of equations. We will take a look at how inverse matrices and LU-Decompositions can help with the solution process. We'll also take a look at a couple of other ideas in the solution of systems of equations.

Systems of Equations

Let's start off this section with the definition of a linear equation. Here are a couple of examples of linear equations.

\[6x - 8y + 10z = 3 \qquad 7x_1 - \frac{5}{9}x_2 = -1\]

In the second equation note the use of the subscripts on the variables. This is a common notational device that will be used fairly extensively here. It is especially useful when we get into the general case(s) and we won't know how many variables (often called unknowns) there are in the equation.

So, just what makes these two equations linear? There are several main points to notice. First, the unknowns only appear to the first power and there aren't any unknowns in the denominator of a fraction. Also notice that there are no products and/or quotients of unknowns. All of these ideas are required in order for an equation to be a linear equation. Unknowns only occur in numerators, they are only to the first power and there are no products or quotients of unknowns.

The most general linear equation is,

\[a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b \qquad (1)\]

where there are n unknowns, \(x_1, x_2, \ldots, x_n\), and \(a_1, a_2, \ldots, a_n, b\) are all known numbers.

Next we need to take a look at the solution set of a single linear equation. A solution set (or often just solution) for (1) is a set of numbers \(t_1, t_2, \ldots, t_n\) so that if we set \(x_1 = t_1\), \(x_2 = t_2\), ..., \(x_n = t_n\) then (1) will be satisfied. By satisfied we mean that if we plug these numbers into the left side of (1) and do the arithmetic we will get b as an answer.

The first thing to notice about the solution set to a single linear equation that contains at least two variables with non-zero coefficients is that we will have an infinite number of solutions. We will also see that while there are infinitely many possible solutions they are all related to each other in some way. Note that if there is one or fewer variables with non-zero coefficients then there will be a single solution or no solutions, depending upon the value of b.

Let's find the solution sets for the two linear equations given at the start of this section.

Example 1 Find the solution set for each of the following linear equations.
(a) \(7x_1 - \frac{5}{9}x_2 = -1\)
(b) \(6x - 8y + 10z = 3\)

Solution
(a) \(7x_1 - \frac{5}{9}x_2 = -1\)
The first thing that we'll do here is solve the equation for one of the two unknowns. It doesn't matter which one we solve for, but we'll usually try to pick the one that will mean the least amount of (or at least simpler) work. In this case it will probably be slightly easier to solve for \(x_1\) so let's do that.
\[\begin{aligned} 7x_1 - \frac{5}{9}x_2 &= -1 \\ 7x_1 &= -1 + \frac{5}{9}x_2 \\ x_1 &= -\frac{1}{7} + \frac{5}{63}x_2 \end{aligned}\]

Now, what this tells us is that if we have a value for \(x_2\) then we can determine a corresponding value for \(x_1\). Since we have a single linear equation there is nothing to restrict our choice of \(x_2\) and so we'll let \(x_2\) be any number. We will usually write this as \(x_2 = t\), where t is any number. Note that there is nothing special about the t; this is just the letter that I usually use in these cases. Others often use s for this letter and, of course, you could choose it to be just about anything as long as it's not a letter representing one of the unknowns in the equation (x in this case).

Once we've "chosen" \(x_2\) we'll write the general solution set as follows,

\[x_1 = -\frac{1}{7} + \frac{5}{63}t \qquad x_2 = t\]

So, just what does this tell us as far as actual number solutions go? We'll choose any value of t and plug in to get a pair of numbers \(x_1\) and \(x_2\) that will satisfy the equation. For instance, picking a couple of values of t completely at random gives,

\[\begin{aligned} t &= 0: & x_1 &= -\frac{1}{7}, & x_2 &= 0 \\ t &= 27: & x_1 &= -\frac{1}{7} + \frac{5}{63}(27) = 2, & x_2 &= 27 \end{aligned}\]

We can easily check that these are in fact solutions to the equation by plugging them back into the equation.

\[\begin{aligned} t &= 0: & 7\left(-\frac{1}{7}\right) - \frac{5}{9}(0) &= -1 \\ t &= 27: & 7(2) - \frac{5}{9}(27) &= 14 - 15 = -1 \end{aligned}\]

So, for each case, when we plugged in the values we got for \(x_1\) and \(x_2\) we got -1 out of the equation as we were supposed to. Note that since there are an infinite number of choices for t there are in fact an infinite number of possible solutions to this linear equation.

(b) \(6x - 8y + 10z = 3\)
We'll do this one with a little less detail since it works in essentially the same manner. The fact that we now have three unknowns will change things slightly but not overly much. We will first solve the equation for one of the variables and again it won't matter which one we choose to solve for.

\[\begin{aligned} 10z &= 3 - 6x + 8y \\ z &= \frac{3}{10} - \frac{3}{5}x + \frac{4}{5}y \end{aligned}\]

In this case we will need to know values for both x and y in order to get a value for z. As with the first case, there is nothing in this problem to restrict our choices of x and y. We can therefore let them be any number(s). In this case we'll choose \(x = t\) and \(y = s\). Note that we chose different letters here since there is no reason to think that both x and y will have exactly the same value (although it is possible for them to have the same value).

The solution set to this linear equation is then,

\[x = t \qquad y = s \qquad z = \frac{3}{10} - \frac{3}{5}t + \frac{4}{5}s\]

So, if we choose any values for t and s we can get a set of number solutions as follows.

\[\begin{aligned} x &= 0, & y &= -2: & z &= \frac{3}{10} - \frac{3}{5}(0) + \frac{4}{5}(-2) = -\frac{13}{10} \\ x &= -\frac{3}{2}, & y &= 5: & z &= \frac{3}{10} - \frac{3}{5}\left(-\frac{3}{2}\right) + \frac{4}{5}(5) = \frac{26}{5} \end{aligned}\]

As with the first part, if we take either set of three numbers we can plug them into the equation to verify that the equation will be satisfied. We'll do one of them and leave the other to you to check.

\[6\left(-\frac{3}{2}\right) - 8(5) + 10\left(\frac{26}{5}\right) = -9 - 40 + 52 = 3\]

The variables that we got to choose values for (\(x_2\) in the first example and x and y in the second) are sometimes called free variables.
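As a quick side check of Example 1(b) (this snippet is my own addition, not part of the notes), we can plug the general solution back into the equation for a few choices of t and s using Python's exact rational arithmetic:

```python
# Verify the parametric solution x = t, y = s, z = 3/10 - (3/5)t + (4/5)s
# of 6x - 8y + 10z = 3 for several (t, s) pairs, using exact fractions.
from fractions import Fraction

def solution(t, s):
    """Return (x, y, z) from the general solution set of Example 1(b)."""
    t, s = Fraction(t), Fraction(s)
    return t, s, Fraction(3, 10) - Fraction(3, 5) * t + Fraction(4, 5) * s

for t, s in [(0, -2), (Fraction(-3, 2), 5), (7, 11)]:
    x, y, z = solution(t, s)
    assert 6 * x - 8 * y + 10 * z == 3  # the equation is satisfied exactly
```

Every choice of the free variables passes the assertion, which is exactly the "infinitely many solutions" behavior described above.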
We now need to start talking about the actual topic of this section, systems of linear equations. A system of linear equations is nothing more than a collection of two or more linear equations. Here are some examples of systems of linear equations.

\[\begin{aligned} 2x + 3y &= 9 \\ x - 2y &= -13 \end{aligned}\]

[Two further example systems appeared here, one with more equations than unknowns and one with fewer equations than unknowns.]

As we can see from these examples, systems of equations can have any number of equations and/or unknowns. The system may have the same number of equations as unknowns, more equations than unknowns, or fewer equations than unknowns.

A solution set to a system with n unknowns, \(x_1, x_2, \ldots, x_n\), is a set of numbers, \(t_1, t_2, \ldots, t_n\), so that if we set \(x_1 = t_1\), \(x_2 = t_2\), ..., \(x_n = t_n\) then all of the equations in the system will be satisfied. Or, in other words, the set of numbers \(t_1, t_2, \ldots, t_n\) is a solution to each of the individual equations in the system.

For example, \(x = -3\), \(y = 5\) is a solution to the first system listed above,

\[\begin{aligned} 2x + 3y &= 9 \\ x - 2y &= -13 \end{aligned} \qquad (2)\]

because,

\[2(-3) + 3(5) = 9 \qquad \& \qquad (-3) - 2(5) = -13\]

However, \(x = -15\), \(y = -1\) is not a solution to the system because,

\[2(-15) + 3(-1) = -33 \ne 9 \qquad \& \qquad (-15) - 2(-1) = -13\]

We can see from these calculations that \(x = -15\), \(y = -1\) is NOT a solution to the first equation, but it IS a solution to the second equation. Since this pair of numbers is not a solution to both of the equations in (2) it is not a solution to the system. The fact that it's a solution to one of them isn't material. In order to be a solution to the system the set of numbers must be a solution to each and every equation in the system.

It is completely possible as well that a system will not have a solution at all. Consider the following system.

\[\begin{aligned} x - 4y &= 10 \\ x - 4y &= -3 \end{aligned} \qquad (3)\]

It is clear (hopefully) that this system of equations can't possibly have a solution. A solution to this system would have to be a pair of numbers x and y so that if we plugged them into each equation it would be a solution to each equation. However, since the left sides are identical this would mean that we'd need an x and a y so that \(x - 4y\) is both 10 and -3 for the exact same pair of numbers. This clearly can't happen and so (3) does not have a solution.

Likewise, it is possible for a system to have more than one solution, although we do need to be careful here as we'll see. Let's take a look at the following system.

\[\begin{aligned} -2x + y &= 8 \\ 8x - 4y &= -32 \end{aligned} \qquad (4)\]

We'll leave it to you to verify that all of the following are four of the infinitely many solutions to the first equation in this system.

\[x = 0,\ y = 8 \qquad x = -3,\ y = 2 \qquad x = -4,\ y = 0 \qquad x = 5,\ y = 18\]

Recall from our work above that there will be infinitely many solutions to a single linear equation. We'll also leave it to you to verify that these four solutions are also four of the infinitely many solutions to the second equation in (4).

Let's investigate this a little more. Let's just find the solution to the first equation (we'll worry about the second equation in a second). Following the work we did in Example 1 we can see that the infinitely many solutions to the first equation in (4) are

\[x = t \qquad y = 2t + 8 \qquad t \text{ is any number}\]

Now, if we also find just the solutions to the second equation in (4) we get

\[x = t \qquad y = 2t + 8 \qquad t \text{ is any number}\]

These are exactly the same! So, this means that if we have an actual numeric solution (found by choosing t above) to the first equation it will be guaranteed to also be a solution to the second equation and so will be a solution to the system (4). This means that we in fact have infinitely many solutions to (4).
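The definition above, "a solution must satisfy each and every equation," translates directly into a few lines of code. This little helper is my own sketch (not from the notes); each equation is stored as a tuple of coefficients together with its right-hand side:

```python
# Check whether a candidate point satisfies every equation of a system.
# Each equation is stored as (coefficients, rhs), meaning sum(c*v) == rhs.
system_2 = [((2, 3), 9),     # 2x + 3y =  9
            ((1, -2), -13)]  #  x - 2y = -13

def is_solution(system, point):
    return all(sum(c * v for c, v in zip(coeffs, point)) == rhs
               for coeffs, rhs in system)

print(is_solution(system_2, (-3, 5)))    # True: solves both equations
print(is_solution(system_2, (-15, -1)))  # False: solves only the second
```

Note how the second call returns False even though the point solves one of the two equations, mirroring the discussion above.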
Let's take a look at the three systems we've been working with above in a little more detail. This will allow us to see a couple of nice facts about systems.

Since each of the equations in (2), (3), and (4) is linear in two unknowns (x and y), the graph of each of these equations is that of a line. Let's graph the pair of equations from each system on the same graph and see what we get.

[Graphs of the three systems (2), (3), and (4) appeared here.]

From the graph of the equations for system (2) we can see that the two lines intersect at the point \((-3, 5)\) and notice that, as a point, this is the solution to the system as well. In other words, in this case the solution to the system of two linear equations and two unknowns is simply the intersection point of the two lines.

Note that this idea is validated in the solutions to systems (3) and (4). System (3) has no solution and we can see from the graph of these equations that the two lines are parallel and hence will never intersect. In system (4) we had infinitely many solutions and the graph of these equations shows us that they are in fact the same line, or in some ways they "intersect" at an infinite number of points.

Now, to this point we've been looking at systems of two equations with two unknowns, but some of the ideas we saw above can be extended to general systems of n equations with m unknowns.

First, there is a nice geometric interpretation to the solution of systems with equations in two or three unknowns. Note that the number of equations that we've got won't matter; the interpretation will be the same.

If we've got a system of linear equations in two unknowns then the solution to the system represents the point(s) where all (not some, but ALL) the lines will intersect. If there is no solution then the lines given by the equations in the system will not intersect at a single point. Note that in the no solution case, if there are more than two equations it may be that any two of the equations will intersect, but there won't be a single point where all of the lines intersect.

If we've got a system of linear equations in three unknowns then the graphs of the equations will be planes in 3D-space and the solution to the system will represent the point(s) where all the planes will intersect. If there is no solution then there are no point(s) where all the planes given by the equations of the system will intersect. As with lines, it may be in this case that any two of the planes will intersect, but there won't be any point where all of the planes intersect.

On a side note, we should point out that lines can intersect at a single point or, if the equations give the same line, we can think of them as intersecting at infinitely many points. Planes can intersect at a point or on a line (and so will have infinitely many intersection points) and, if the equations give the same plane, we can think of the planes as intersecting at infinitely many places.

We need to be a little careful about the infinitely many intersection points case. When we're dealing with equations in two unknowns and there are infinitely many solutions it means that the equations in the system all give the same line. However, when dealing with equations in three unknowns and we've got infinitely many solutions we can have one of two cases. Either we've got planes that intersect along a line, or the equations will give the same plane.
For systems of equations in more than three variables we can't graph them, so we can't talk about a "geometric" interpretation, but we can still say that a solution to such a system will represent the point(s) where all the equations will "intersect" even if we can't visualize such an intersection point.

From the geometric interpretation of the solution to two equations in two unknowns we know that we have one of three possible solutions. We will have either no solution (the lines are parallel), one solution (the lines intersect at a single point) or infinitely many solutions (the equations are the same line). There is simply no other possible number of solutions since two lines that intersect will either intersect exactly once or will be the same line. It turns out that this is in fact the case for a general system.

Theorem 1 Given a system of n equations and m unknowns there will be one of three possibilities for solutions to the system.
1. There will be no solution.
2. There will be exactly one solution.
3. There will be infinitely many solutions.

If there is no solution to the system we call the system inconsistent and if there is at least one solution to the system we call it consistent.

Now that we've got some of the basic ideas about systems taken care of we need to start thinking about how to use linear algebra to solve them. Actually, that's not quite true. We're not going to do any solving until the next section. In this section we just want to get some of the basic notation and ideas involved in the solving process out of the way before we actually start trying to solve them.

We're going to start off with a simplified way of writing the system of equations. For this we will need the following general system of n equations and m unknowns.

\[\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1m}x_m &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2m}x_m &= b_2 \\ &\ \,\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nm}x_m &= b_n \end{aligned} \qquad (5)\]

In this system the unknowns are \(x_1, x_2, \ldots, x_m\) and the \(a_{ij}\) and \(b_i\) are known numbers. Note as well how we've subscripted the coefficients of the unknowns (the \(a_{ij}\)). The first subscript, i, denotes the equation that the coefficient is in and the second subscript, j, denotes the unknown that it multiplies. For instance, \(a_{36}\) would be the coefficient of \(x_6\) in the third equation.

Any system of equations can be written as an augmented matrix. A matrix is just a rectangular array of numbers and we'll be looking at these in great detail in this course, so don't worry too much at this point about what a matrix is. Here is the augmented matrix for the general system in (5).

\[\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2m} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} & b_n \end{bmatrix}\]

Each row of the augmented matrix consists of the coefficients and the constant on the right of the equal sign from a given equation in the system. The first row is for the first equation, the second row is for the second equation, etc. Likewise, each of the first m columns of the matrix consists of the coefficients of the unknowns. The first column contains the coefficients of \(x_1\), the second column contains the coefficients of \(x_2\), etc. The final column (the (m+1)st column) contains all the constants on the right of the equal sign.

Note that the augmented part of the name arises because we tack the \(b_i\)'s onto the matrix. If we don't tack those on and we just have

\[\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}\]

then we call this the coefficient matrix for the system.
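As a concrete illustration (an addition of mine, not part of the notes), here is how the coefficient matrix and the augmented matrix of system (2) look when built with NumPy. The names A, b and augmented are just illustrative choices:

```python
# Build the coefficient matrix A and the augmented matrix [A | b]
# for system (2): 2x + 3y = 9 and x - 2y = -13.
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -2.0]])
b = np.array([9.0, -13.0])

# np.column_stack glues the constant column b onto the coefficient matrix.
augmented = np.column_stack((A, b))
print(augmented)  # rows: [2, 3, 9] and [1, -2, -13]
```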
Example 2 Write down the augmented matrix for the following system.

\[\begin{aligned} 3x_1 - 10x_2 + 6x_3 - x_4 &= 3 \\ x_1 + 9x_3 - 5x_4 &= -12 \\ -4x_1 + x_2 - 9x_3 + 2x_4 &= 7 \end{aligned}\]

Solution There really isn't too much to do here other than write down the system.

\[\begin{bmatrix} 3 & -10 & 6 & -1 & 3 \\ 1 & 0 & 9 & -5 & -12 \\ -4 & 1 & -9 & 2 & 7 \end{bmatrix}\]

Notice that the second equation did not contain an \(x_2\) and so we consider its coefficient to be zero.

Note as well that, given an augmented matrix, we can always go back to a system of equations.

Example 3 For the given augmented matrix write down the corresponding system of equations.

\[\begin{bmatrix} 4 & -1 & 1 \\ -5 & -8 & 4 \\ 9 & 2 & -2 \end{bmatrix}\]

Solution Since we know each row corresponds to an equation we have three equations in the system. Also, the first two columns represent coefficients of unknowns and so we'll have two unknowns, while the third column consists of the constants to the right of the equal sign. Here's the system that corresponds to this augmented matrix.

\[\begin{aligned} 4x_1 - x_2 &= 1 \\ -5x_1 - 8x_2 &= 4 \\ 9x_1 + 2x_2 &= -2 \end{aligned}\]

There is one final topic that we need to discuss in this section before we move on to actually solving systems of equations with linear algebra techniques.

In the next section, where we will actually be solving systems, our main tools will be the three elementary row operations. Each of these operations will operate on a row (which shouldn't be too surprising given the name...) in the augmented matrix, and since each row in the augmented matrix corresponds to an equation these operations have equivalent operations on equations.

Here are the three row operations, their equivalent equation operations, as well as the notation that we'll be using to denote each of them.

  Row Operation                       Equation Operation                       Notation
  Multiply row i by the constant c    Multiply equation i by the constant c    \(cR_i\)
  Interchange rows i and j            Interchange equations i and j            \(R_i \leftrightarrow R_j\)
  Add c times row i to row j          Add c times equation i to equation j     \(R_j + cR_i\)

The first two operations are fairly self-explanatory. The third is also a fairly simple operation, however there are a couple of things that we need to make clear about this operation. First, in this operation only row (equation) j actually changes. Even though we are multiplying row (equation) i by c, that is done in our heads and the results of this multiplication are added to row (equation) j. Also, when we say that we add c times a row to another row we really mean that we add corresponding entries of each row.

Let's take a look at some examples of these operations in action.

Example 4 Perform each of the indicated row operations on the given augmented matrix.

\[\begin{bmatrix} 2 & 4 & -1 & -3 \\ 6 & -1 & -4 & 10 \\ 7 & 1 & -1 & 5 \end{bmatrix}\]

(a) \(-3R_1\)
(b) \(\frac{1}{2}R_2\)
(c) \(R_1 \leftrightarrow R_3\)
(d) \(R_2 + 5R_3\)
(e) \(R_1 - 3R_2\)

Solution In each of these we will actually perform both the row and equation operation to illustrate that they are actually the same operation and that the new augmented matrix we get is in fact the correct one. For reference purposes, the system corresponding to the augmented matrix given for this problem is,

\[\begin{aligned} 2x_1 + 4x_2 - x_3 &= -3 \\ 6x_1 - x_2 - 4x_3 &= 10 \\ 7x_1 + x_2 - x_3 &= 5 \end{aligned}\]

Note that at each part we will go back to the original augmented matrix and/or system of equations to perform the operation. In other words, we won't be using the results of the previous part as a starting point for the current operation.
(a) \(-3R_1\)
Okay, in this case we're going to multiply the first row (equation) by -3. This means that we will multiply each element of the first row by -3, or each of the coefficients of the first equation by -3. Here is the result of this operation.

\[\begin{bmatrix} -6 & -12 & 3 & 9 \\ 6 & -1 & -4 & 10 \\ 7 & 1 & -1 & 5 \end{bmatrix} \quad\Leftrightarrow\quad \begin{aligned} -6x_1 - 12x_2 + 3x_3 &= 9 \\ 6x_1 - x_2 - 4x_3 &= 10 \\ 7x_1 + x_2 - x_3 &= 5 \end{aligned}\]

(b) \(\frac{1}{2}R_2\)
This is similar to the first one. We will multiply each element of the second row by one-half, or each coefficient of the second equation by one-half. Here are the results of this operation.

\[\begin{bmatrix} 2 & 4 & -1 & -3 \\ 3 & -\frac{1}{2} & -2 & 5 \\ 7 & 1 & -1 & 5 \end{bmatrix} \quad\Leftrightarrow\quad \begin{aligned} 2x_1 + 4x_2 - x_3 &= -3 \\ 3x_1 - \tfrac{1}{2}x_2 - 2x_3 &= 5 \\ 7x_1 + x_2 - x_3 &= 5 \end{aligned}\]

Do not get excited about the fraction showing up. Fractions are going to be a fact of life with much of the work that we're going to be doing, so get used to seeing them. Note that often in cases like this we will say that we divided the second row by 2 instead of multiplied by one-half.

(c) \(R_1 \leftrightarrow R_3\)
In this case we're just going to interchange the first and third row or equation.

\[\begin{bmatrix} 7 & 1 & -1 & 5 \\ 6 & -1 & -4 & 10 \\ 2 & 4 & -1 & -3 \end{bmatrix} \quad\Leftrightarrow\quad \begin{aligned} 7x_1 + x_2 - x_3 &= 5 \\ 6x_1 - x_2 - 4x_3 &= 10 \\ 2x_1 + 4x_2 - x_3 &= -3 \end{aligned}\]

(d) \(R_2 + 5R_3\)
Okay, we now need to work an example of the third row operation. In this case we will add 5 times the third row (equation) to the second row (equation). So, for the row operation, in our heads we will multiply the third row by 5 and then add each entry of the result to the corresponding entry in the second row. Here are the individual computations for this operation.

\[\begin{aligned} \text{1st entry:}\quad & 6 + 5(7) = 41 \\ \text{2nd entry:}\quad & -1 + 5(1) = 4 \\ \text{3rd entry:}\quad & -4 + 5(-1) = -9 \\ \text{4th entry:}\quad & 10 + 5(5) = 35 \end{aligned}\]

For the corresponding equation operation we will multiply the third equation by 5 to get,

\[35x_1 + 5x_2 - 5x_3 = 25\]

then add this to the second equation to get,

\[41x_1 + 4x_2 - 9x_3 = 35\]

Putting all this together, and remembering that it's the second row (equation) that we're actually changing here, gives

\[\begin{bmatrix} 2 & 4 & -1 & -3 \\ 41 & 4 & -9 & 35 \\ 7 & 1 & -1 & 5 \end{bmatrix} \quad\Leftrightarrow\quad \begin{aligned} 2x_1 + 4x_2 - x_3 &= -3 \\ 41x_1 + 4x_2 - 9x_3 &= 35 \\ 7x_1 + x_2 - x_3 &= 5 \end{aligned}\]

It is important to remember that when multiplying the third row (equation) by 5 we are doing it in our heads and don't actually change the third row (equation).

(e) \(R_1 - 3R_2\)
In this case we'll not go into the detail that we did in the previous part. Most of these types of operations are done almost completely in our heads and so we'll do that here as well so we can start getting used to it. In this part we are going to subtract 3 times the second row (equation) from the first row (equation). Here are the results of this operation.

\[\begin{bmatrix} -16 & 7 & 11 & -33 \\ 6 & -1 & -4 & 10 \\ 7 & 1 & -1 & 5 \end{bmatrix} \quad\Leftrightarrow\quad \begin{aligned} -16x_1 + 7x_2 + 11x_3 &= -33 \\ 6x_1 - x_2 - 4x_3 &= 10 \\ 7x_1 + x_2 - x_3 &= 5 \end{aligned}\]

It is important when doing this work in our heads to be careful of minus signs. In operations such as this one there are often a lot of them and it is easy to lose track of one or more when you get in a hurry.

Okay, we've now got most of the basics down that we'll need to start solving systems of linear equations using linear algebra techniques, so it's time to move on to the next section.
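Before moving on, here is a small sketch (my own, not from the notes) of the three elementary row operations in NumPy. Indices are 0-based in the code, so "row 1" of the notes is index 0, and the function names are mine:

```python
# The three elementary row operations on a NumPy array.
import numpy as np

def scale(M, i, c):            # cR_i : multiply row i by the constant c
    M = M.copy(); M[i] = c * M[i]; return M

def swap(M, i, j):             # R_i <-> R_j : interchange rows i and j
    M = M.copy(); M[[i, j]] = M[[j, i]]; return M

def add_multiple(M, j, i, c):  # R_j + cR_i : add c times row i to row j
    M = M.copy(); M[j] = M[j] + c * M[i]; return M

M = np.array([[2, 4, -1, -3], [6, -1, -4, 10], [7, 1, -1, 5]], dtype=float)
# Part (d) of Example 4: R_2 + 5R_3 gives the new second row [41, 4, -9, 35].
print(add_multiple(M, 1, 2, 5))
```

Note that, just as in the discussion above, add_multiple only changes row j; row i is used in the computation but left untouched.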
Solving Systems of Equations

In this section we are going to take a look at using linear algebra techniques to solve a system of linear equations. Once we have a couple of definitions out of the way we'll see that the process is a fairly simple one. Well, it's fairly simple to write down the process anyway. Applying the process is fairly simple as well, but for large systems it can take quite a few steps.

So, let's get the definitions out of the way. A matrix (any matrix, not just an augmented matrix) is said to be in reduced row-echelon form if it satisfies all four of the following conditions.

1. If there are any rows of all zeros then they are at the bottom of the matrix.
2. If a row does not consist of all zeros then its first non-zero entry (i.e. the leftmost non-zero entry) is a 1. This 1 is called a leading 1.
3. In any two successive rows, neither of which consists of all zeroes, the leading 1 of the lower row is to the right of the leading 1 of the higher row.
4. If a column contains a leading 1 then all the other entries of that column are zero.

A matrix (again, any matrix) is said to be in row-echelon form if it satisfies items 1 – 3 of the reduced row-echelon form definition. Notice from these definitions that a matrix that is in reduced row-echelon form is also in row-echelon form, while a matrix in row-echelon form may or may not be in reduced row-echelon form.

Example 1 The following matrices are all in row-echelon form.

[Three example matrices appeared here; the third has five columns.]

None of the matrices in the previous example are in reduced row-echelon form. The entries that are preventing these matrices from being in reduced row-echelon form are highlighted in red and underlined (for those without color printers...). In order for these matrices to be in reduced row-echelon form all of these highlighted entries would need to be zeroes.

Notice that we didn't highlight the entries above the 1 in the fifth column of the third matrix. Since this 1 is not a leading 1 (i.e. the leftmost non-zero entry) we don't need the numbers above it to be zero in order for the matrix to be in reduced row-echelon form.

Example 2 The following matrices are all in reduced row-echelon form.

[Several example matrices, arranged in two rows, appeared here.]

In the second matrix on the first row we have all zeroes in the entries. This is perfectly acceptable and so don't worry about it. This matrix is in reduced row-echelon form; the fact that it doesn't have any non-zero entries does not change that, since it satisfies the conditions.

Also, in the second matrix of the second row notice that the last column does not have zeroes above the 1 in that column. That is perfectly acceptable since the 1 in that column is not a leading 1 for the fourth row.

Notice from Examples 1 and 2 that the only real difference between row-echelon form and reduced row-echelon form is that a matrix in row-echelon form is only required to have zeroes below a leading 1, while a matrix in reduced row-echelon form must have zeroes both below and above a leading 1.
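The four conditions above translate almost line for line into code. The following checker is my own sketch written to mirror the list, not something from the notes:

```python
# Test whether a matrix (list of rows) is in reduced row-echelon form.
def is_rref(M):
    lead_cols = []
    seen_zero_row = False
    for row in M:
        nonzero = [j for j, v in enumerate(row) if v != 0]
        if not nonzero:                       # condition 1: zero rows at the bottom
            seen_zero_row = True
            continue
        if seen_zero_row:
            return False
        j = nonzero[0]
        if row[j] != 1:                       # condition 2: first non-zero entry is a 1
            return False
        if lead_cols and j <= lead_cols[-1]:  # condition 3: leading 1s move right
            return False
        lead_cols.append(j)
    for j in lead_cols:                       # condition 4: leading-1 columns otherwise zero
        if sum(1 for row in M if row[j] != 0) != 1:
            return False
    return True

print(is_rref([[1, 0, 0, -1], [0, 1, 0, 4], [0, 0, 1, 2]]))  # True
print(is_rref([[1, 2, 3, 13], [0, 1, 1, 6], [0, 0, 1, 2]]))  # False: row-echelon only
```

Dropping the final loop (condition 4) would give a row-echelon form checker instead, matching the definition above.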
Okay, let's now start thinking about how to use linear algebra techniques to solve systems of linear equations. The process is actually quite simple. To solve a system of equations we will first write down the augmented matrix for the system. We will then use elementary row operations to reduce the augmented matrix to either row-echelon form or to reduced row-echelon form. Any further work that we'll need to do will depend upon where we stop.

If we go all the way to reduced row-echelon form then in many cases we will not need to do any further work to get the solution, and in those times where we do need to do more work we will generally not need to do much more work. Reducing the augmented matrix to reduced row-echelon form is called Gauss-Jordan Elimination.

If we stop at row-echelon form we will have a little more work to do in order to get the solution, but it is generally fairly simple arithmetic. Reducing the augmented matrix to row-echelon form and then stopping is called Gaussian Elimination.

At this point we should work a couple of examples.

Example 3 Use Gaussian Elimination and Gauss-Jordan Elimination to solve the following system of linear equations.

\[\begin{aligned} -2x_1 + x_2 - x_3 &= 4 \\ x_1 + 2x_2 + 3x_3 &= 13 \\ 3x_1 + x_3 &= -1 \end{aligned}\]

Solution Since we're asked to use both solution methods on this system, and since a matrix in reduced row-echelon form must also be in row-echelon form, we'll start off by putting the augmented matrix in row-echelon form, then stop to find the solution. This will be Gaussian Elimination. After doing that we'll go back, pick up from row-echelon form and further reduce the matrix to reduced row-echelon form; at this point we'll have performed Gauss-Jordan Elimination.

So, let's start off by getting the augmented matrix for this system.

\[\begin{bmatrix} -2 & 1 & -1 & 4 \\ 1 & 2 & 3 & 13 \\ 3 & 0 & 1 & -1 \end{bmatrix}\]

As we go through the steps in this first example we'll mark the entry(s) that we're going to be looking at in each step in red so that we don't lose track of what we're doing. We should also point out that there are many different paths that we can take to get this matrix into row-echelon form and each path may well produce a different row-echelon form of the matrix. Keep this in mind as you work these problems. The path that you take to get this matrix into row-echelon form should be the one that you find the easiest and that may not be the one that the person next to you finds the easiest. Regardless of which path you take, you are only allowed to use the three elementary row operations that we looked at in the previous section.

So, with that out of the way, we need to make the leftmost non-zero entry in the top row a one. In this case we could use any of the three possible row operations. We could divide the top row by -2 and this would certainly change the red "-2" into a one. However, this will also introduce fractions into the matrix and while we often can't avoid them let's not put them in before we need to.

Next, we could take row three and add it to row one, or we could take three times row 2 and add it to row one. Either of these would also change the red "-2" into a one. However, this row operation is the one that is most prone to arithmetic errors so while it would work let's not use it unless we need to.

This leaves interchanging any two rows. This is an operation that won't always work here to get a 1 into the spot we want, but when it does it will usually be the easiest operation to use.
In this case we've already got a one in the leftmost entry of the second row, so let's just interchange the first and second rows and we'll get a one in the leftmost spot of the first row pretty much for free. Here is this operation.

\[\begin{bmatrix} -2 & 1 & -1 & 4 \\ 1 & 2 & 3 & 13 \\ 3 & 0 & 1 & -1 \end{bmatrix} \xrightarrow{\ R_1 \leftrightarrow R_2\ } \begin{bmatrix} 1 & 2 & 3 & 13 \\ -2 & 1 & -1 & 4 \\ 3 & 0 & 1 & -1 \end{bmatrix}\]

Now, the next step we'll need to take is changing the two numbers in the first column under the leading 1 into zeroes. Recall that as we move down the rows the leading 1 MUST move off to the right. This means that the two numbers under the leading 1 in the first column will need to become zeroes. Again, there are often several row operations that can be done to do this. However, in most cases adding multiples of the row containing the leading 1 (the first row in this case) onto the rows we need to have zeroes is often the easiest. Here are the two row operations that we'll do in this step.

\[\begin{bmatrix} 1 & 2 & 3 & 13 \\ -2 & 1 & -1 & 4 \\ 3 & 0 & 1 & -1 \end{bmatrix} \xrightarrow[\ R_3 - 3R_1\ ]{\ R_2 + 2R_1\ } \begin{bmatrix} 1 & 2 & 3 & 13 \\ 0 & 5 & 5 & 30 \\ 0 & -6 & -8 & -40 \end{bmatrix}\]

Notice that since each operation changed a different row we went ahead and performed both of them at the same time. We will often do this when multiple operations will all change different rows.

We now need to change the red "5" into a one. In this case we'll go ahead and divide the second row by 5 since this won't introduce any fractions into the matrix and it will give us the number we're looking for.

\[\xrightarrow{\ \frac{1}{5}R_2\ } \begin{bmatrix} 1 & 2 & 3 & 13 \\ 0 & 1 & 1 & 6 \\ 0 & -6 & -8 & -40 \end{bmatrix}\]

Next, we'll use the third row operation to change the red "-6" into a zero so the leading 1 of the third row will move to the right of the leading 1 in the second row. This time we'll be using a multiple of the second row to do this. Here is the work in this step.

\[\xrightarrow{\ R_3 + 6R_2\ } \begin{bmatrix} 1 & 2 & 3 & 13 \\ 0 & 1 & 1 & 6 \\ 0 & 0 & -2 & -4 \end{bmatrix}\]

Notice that in both steps where we needed to get zeroes below a leading 1 we added multiples of the row containing the leading 1 to the rows in which we wanted zeroes. This will always work in this case. It may be possible to use other row operations, but the third one can always be used in these cases.

The final step we need to get the matrix into row-echelon form is to change the red "-2" into a one. To do this we don't really have a choice here. Since we need the leading one in the third row to be in the third or fourth column (i.e. to the right of the leading one in the second column) we MUST retain the zeroes in the first and second column of the third row.

Interchanging the second and third row would definitely put a one in the third column of the third row, however, it would also change the zero in the second column, which we can't allow. Likewise, we could add the first row to the third row and again this would put a one in the third column of the third row, but this operation would also change both of the zeroes in front of it, which can't be allowed.

Therefore, our only real choice in this case is to divide the third row by -2. This will retain the zeroes in the first and second column and change the entry in the third column into a one. Note that this step will often introduce fractions into the matrix, but at this point that can't be avoided. Here is the work for this step.
\[\xrightarrow{\ -\frac{1}{2}R_3\ } \begin{bmatrix} 1 & 2 & 3 & 13 \\ 0 & 1 & 1 & 6 \\ 0 & 0 & 1 & 2 \end{bmatrix}\]

At this point the augmented matrix is in row-echelon form. So, if we're going to perform Gaussian Elimination on this matrix we'll stop and go back to equations. Doing this gives,

\[\begin{aligned} x_1 + 2x_2 + 3x_3 &= 13 \\ x_2 + x_3 &= 6 \\ x_3 &= 2 \end{aligned}\]

At this point solving is quite simple. In fact we can see from this that \(x_3 = 2\). Plugging this into the second equation gives \(x_2 = 4\). Finally, plugging both of these into the first equation gives \(x_1 = -1\). Summarizing up, the solution to the system is,

\[x_1 = -1 \qquad x_2 = 4 \qquad x_3 = 2\]

This substitution process is called back substitution.

Now, let's pick back up at the row-echelon form of the matrix and further reduce the matrix into reduced row-echelon form. The first step in doing this will be to change the numbers above the leading 1 in the third row into zeroes. Here are the operations that will do that for us.

\[\begin{bmatrix} 1 & 2 & 3 & 13 \\ 0 & 1 & 1 & 6 \\ 0 & 0 & 1 & 2 \end{bmatrix} \xrightarrow[\ R_2 - R_3\ ]{\ R_1 - 3R_3\ } \begin{bmatrix} 1 & 2 & 0 & 7 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 2 \end{bmatrix}\]

The final step is then to change the red "2" above the leading one in the second row into a zero. Here is this operation.

\[\xrightarrow{\ R_1 - 2R_2\ } \begin{bmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 2 \end{bmatrix}\]

We are now in reduced row-echelon form, so all we need to do to perform Gauss-Jordan Elimination is to go back to equations.

\[x_1 = -1 \qquad x_2 = 4 \qquad x_3 = 2\]

We can see from this that one of the nice consequences of Gauss-Jordan Elimination is that when there is a single solution to the system there is no work to be done to find the solution. It is generally given to us for free. Note as well that it is the same solution as the one that we got by using Gaussian Elimination, as we should expect.

Before we proceed with another example we need to give a quick fact. As was pointed out in this example there are many paths we could take to do this problem. It was also noted that the path we chose would affect the row-echelon form of the matrix. This will not be true for the reduced row-echelon form however. There is only one reduced row-echelon form of a given matrix no matter what path we choose to take to get to that point.
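As a side check of Example 3 (again my own addition, not a step from the notes), SymPy's rref() carries out Gauss-Jordan elimination exactly and also returns the columns that ended up with leading 1s:

```python
# Reproduce Example 3 with SymPy's exact Gauss-Jordan elimination.
from sympy import Matrix

aug = Matrix([[-2, 1, -1, 4],
              [ 1, 2,  3, 13],
              [ 3, 0,  1, -1]])

rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)    # Matrix([[1, 0, 0, -1], [0, 1, 0, 4], [0, 0, 1, 2]])
print(pivot_columns)  # (0, 1, 2): a leading 1 in each of the first three columns
```

Since the reduced row-echelon form of a matrix is unique, this must agree with the hand computation above no matter which path of row operations we took.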
Okay, let's do some more examples. Since we've done one example in excruciating detail we won't bother to put as much detail into the remaining examples. All operations will be shown, but the explanations of each operation will not be given.

Example 4 Solve the following system of linear equations.
\[
\begin{aligned}
x_1 - 2x_2 + 3x_3 &= -2 \\ -x_1 + x_2 - 2x_3 &= 3 \\ 2x_1 - x_2 + 3x_3 &= 1
\end{aligned}
\]
Solution
First, the instructions for this problem did not specify which method to use, so we'll need to make a decision. No matter which method we choose we will need to get the augmented matrix down to row-echelon form, so let's get to that point and then see what we've got. If we've got something easy to work with we'll stop and do Gaussian Elimination, and if not we'll proceed to reduced row-echelon form and do Gauss-Jordan Elimination.

So, here is the augmented matrix for this system,
\[\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ -1 & 1 & -2 & 3 \\ 2 & -1 & 3 & 1 \end{array}\right]\]
and here is the work to put it into row-echelon form.
\[
\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ -1 & 1 & -2 & 3 \\ 2 & -1 & 3 & 1 \end{array}\right]
\xrightarrow[\;R_3 - 2R_1\;]{\;R_2 + R_1\;}
\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ 0 & -1 & 1 & 1 \\ 0 & 3 & -3 & 5 \end{array}\right]
\xrightarrow{\;-R_2\;}
\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ 0 & 1 & -1 & -1 \\ 0 & 3 & -3 & 5 \end{array}\right]
\]
\[
\xrightarrow{\;R_3 - 3R_2\;}
\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 8 \end{array}\right]
\xrightarrow{\;\frac{1}{8}R_3\;}
\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 1 \end{array}\right]
\]
Okay, we're now in row-echelon form. Let's go back to equations and see what we've got.
\[
\begin{aligned} x_1 - 2x_2 + 3x_3 &= -2 \\ x_2 - x_3 &= -1 \\ 0 &= 1 \end{aligned}
\]
Hmmmm. That last equation doesn't look correct. We've got a couple of possibilities here. We've either just managed to prove that 0 = 1 (and we know that's not true), we've made a mistake (always possible, but we haven't in this case), or there's another possibility we haven't thought of yet.

Recall from Theorem 1 in the previous section that a system has one of three possibilities for a solution: no solution, one solution, or infinitely many solutions. In this case we've got no solution. When we go back to equations and get an equation that clearly can't be true, such as the third equation above, we know that we've got no solution.

Note as well that we didn't really need the last step above. We could just as easily have arrived at this conclusion from the second-to-last matrix, since 0 = 8 is just as incorrect as 0 = 1.

So, to close out this problem, the official answer is that there is no solution to this system.
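Numerically, an inconsistent system like this shows up as the augmented matrix having a larger rank than the coefficient matrix. Here is a hedged sketch of that check; the rank criterion is a standard fact, not a step taken in these notes.

```python
import numpy as np

A = np.array([[1, -2, 3],
              [-1, 1, -2],
              [2, -1, 3]])
b = np.array([-2, 3, 1])

# The system is consistent exactly when rank(A) == rank([A | b]).
aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(aug))  # 3 -> the ranks differ, so no solution
```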
In order to see how a simple change in a system can lead to a totally different type of solution, let's take a look at the following example.

Example 5 Solve the following system of linear equations.
\[
\begin{aligned} x_1 - 2x_2 + 3x_3 &= -2 \\ -x_1 + x_2 - 2x_3 &= 3 \\ 2x_1 - x_2 + 3x_3 &= -7 \end{aligned}
\]
Solution
The only difference between this system and the previous one is the $-7$ in the third equation; in the previous example this was a 1. Here is the augmented matrix for this system.
\[\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ -1 & 1 & -2 & 3 \\ 2 & -1 & 3 & -7 \end{array}\right]\]
Now, since this is essentially the same augmented matrix as in the previous example, the first few steps are identical and so there is no reason to show them here. After taking the same steps as above (we won't need the last step this time), here is what we arrive at.
\[\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 \end{array}\right]\]
For some good practice you should go through the steps above and make sure you arrive at this matrix.

In this case the last line converts to the equation $0 = 0$, and this is a perfectly acceptable equation, because after all zero is in fact equal to zero! In other words, we shouldn't get excited about it. At this point we could stop, convert the first two lines of the matrix to equations, and find a solution. However, in this case it will actually be easier to do the one final step to get to reduced row-echelon form. Here is that step.
\[
\left[\begin{array}{ccc|c} 1 & -2 & 3 & -2 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 \end{array}\right]
\xrightarrow{\;R_1 + 2R_2\;}
\left[\begin{array}{ccc|c} 1 & 0 & 1 & -4 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 \end{array}\right]
\]
We are now in reduced row-echelon form, so let's convert to equations and see what we've got.
\[x_1 + x_3 = -4 \qquad\qquad x_2 - x_3 = -1\]
Okay, we've got more unknowns than equations, and in many cases this will mean that we have infinitely many solutions. To see if this is the case for this example, notice that each of the equations has an $x_3$ in it, so we can solve each equation for the remaining variable in terms of $x_3$ as follows,
\[x_1 = -4 - x_3 \qquad\qquad x_2 = -1 + x_3\]
So, we can choose $x_3$ to be any value we want, and hence it is a free variable (recall we saw these in the previous section), and each choice of $x_3$ will give us a different solution to the system. So, just like in the previous section, we'll rename $x_3$ and write the solution as follows,
\[x_1 = -4 - t \qquad x_2 = -1 + t \qquad x_3 = t \qquad\quad t \text{ is any number}\]
We therefore get infinitely many solutions, one for each possible value of $t$, and since $t$ can be any real number there are infinitely many choices for $t$.

Before moving on let's first address why we used Gauss-Jordan Elimination in the previous example. If we'd used Gaussian Elimination (which we definitely could have used) the system of equations from the row-echelon form would have been
\[x_1 - 2x_2 + 3x_3 = -2 \qquad\qquad x_2 - x_3 = -1\]
To arrive at the solution we'd have to solve the second equation for $x_2$ first and then substitute this into the first equation before solving for $x_1$. In my mind this is more work, and work in which I'm more likely to make an arithmetic mistake, than if we'd just gone to reduced row-echelon form in the first place, as we did in the solution. There is nothing wrong with using Gaussian Elimination on a problem like this, but the back substitution is definitely more work when we've got infinitely many solutions than when we've got a single solution.
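It's worth convincing yourself that every choice of $t$ really does solve the original system. A minimal sketch; the particular sample values of $t$ in the loop are just our own spot check.

```python
import numpy as np

A = np.array([[1, -2, 3],
              [-1, 1, -2],
              [2, -1, 3]])
b = np.array([-2, 3, -7])

for t in (-3.0, 0.0, 1.5, 10.0):
    x = np.array([-4 - t, -1 + t, t])   # the one-parameter family found above
    assert np.allclose(A @ x, b)        # every sampled t checks out
print("all sampled solutions satisfy the system")
```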
Okay, to this point we've worked nothing but systems with the same number of equations and unknowns. We need to work a couple of examples where this isn't the case so we don't get too locked into that kind of system.

Example 6 Solve the following system of linear equations.
\[
\begin{aligned} 3x_1 - 4x_2 &= 10 \\ -5x_1 + 8x_2 &= -17 \\ -3x_1 + 12x_2 &= -12 \end{aligned}
\]
Solution
So, let's start with the augmented matrix and reduce it to row-echelon form, and then see if what we've got is nice enough to work with or if we should go the extra step(s) to get to reduced row-echelon form. Here is the augmented matrix.
\[\left[\begin{array}{cc|c} 3 & -4 & 10 \\ -5 & 8 & -17 \\ -3 & 12 & -12 \end{array}\right]\]
Notice that this time, in order to get the leading 1 in the upper left corner, we're probably just going to have to divide the row by 3 and deal with the fractions that will arise. Do not go to great lengths to avoid fractions; they are a fact of life with these problems, so while it's okay to try to avoid them, sometimes it's just going to be easier to deal with them. So, here's the work for reducing the matrix to row-echelon form.
\[
\left[\begin{array}{cc|c} 3 & -4 & 10 \\ -5 & 8 & -17 \\ -3 & 12 & -12 \end{array}\right]
\xrightarrow{\;\frac{1}{3}R_1\;}
\left[\begin{array}{cc|c} 1 & -\frac{4}{3} & \frac{10}{3} \\ -5 & 8 & -17 \\ -3 & 12 & -12 \end{array}\right]
\xrightarrow[\;R_3 + 3R_1\;]{\;R_2 + 5R_1\;}
\left[\begin{array}{cc|c} 1 & -\frac{4}{3} & \frac{10}{3} \\ 0 & \frac{4}{3} & -\frac{1}{3} \\ 0 & 8 & -2 \end{array}\right]
\]
\[
\xrightarrow{\;\frac{3}{4}R_2\;}
\left[\begin{array}{cc|c} 1 & -\frac{4}{3} & \frac{10}{3} \\ 0 & 1 & -\frac{1}{4} \\ 0 & 8 & -2 \end{array}\right]
\xrightarrow{\;R_3 - 8R_2\;}
\left[\begin{array}{cc|c} 1 & -\frac{4}{3} & \frac{10}{3} \\ 0 & 1 & -\frac{1}{4} \\ 0 & 0 & 0 \end{array}\right]
\]
Okay, we're in row-echelon form, and it looks like if we go back to equations at this point we'll only need one quick back substitution, so we'll go ahead and stop here and do Gaussian Elimination. Here are the equations we get from the row-echelon form, and the back substitution.
\[
x_1 - \tfrac{4}{3}x_2 = \tfrac{10}{3} \quad\Rightarrow\quad x_1 = \tfrac{10}{3} + \tfrac{4}{3}\left(-\tfrac{1}{4}\right) = 3
\qquad\qquad
x_2 = -\tfrac{1}{4}
\]
So, the solution to this system is,
\[x_1 = 3 \qquad\qquad x_2 = -\tfrac{1}{4}\]

Example 7 Solve the following system of linear equations.
\[
\begin{aligned} 7x_1 + 2x_2 - 2x_3 - 4x_4 + 3x_5 &= 8 \\ -3x_1 - 3x_2 + 2x_4 + x_5 &= -1 \\ 4x_1 - x_2 - 8x_3 + 20x_5 &= 1 \end{aligned}
\]
Solution
First, let's notice that we are guaranteed to have infinitely many solutions by the fact above, since we've got more unknowns than equations. Here's the augmented matrix for this system.
\[\left[\begin{array}{ccccc|c} 7 & 2 & -2 & -4 & 3 & 8 \\ -3 & -3 & 0 & 2 & 1 & -1 \\ 4 & -1 & -8 & 0 & 20 & 1 \end{array}\right]\]
In this example we can avoid fractions in the first row simply by adding twice the second row to the first to get our leading 1 in that row. So, with that as our initial step, here's the work that will put this matrix into row-echelon form.
\[
\xrightarrow{\;R_1 + 2R_2\;}
\left[\begin{array}{ccccc|c} 1 & -4 & -2 & 0 & 5 & 6 \\ -3 & -3 & 0 & 2 & 1 & -1 \\ 4 & -1 & -8 & 0 & 20 & 1 \end{array}\right]
\xrightarrow[\;R_3 - 4R_1\;]{\;R_2 + 3R_1\;}
\left[\begin{array}{ccccc|c} 1 & -4 & -2 & 0 & 5 & 6 \\ 0 & -15 & -6 & 2 & 16 & 17 \\ 0 & 15 & 0 & 0 & 0 & -23 \end{array}\right]
\]
\[
\xrightarrow{\;R_2 \leftrightarrow R_3\;}
\left[\begin{array}{ccccc|c} 1 & -4 & -2 & 0 & 5 & 6 \\ 0 & 15 & 0 & 0 & 0 & -23 \\ 0 & -15 & -6 & 2 & 16 & 17 \end{array}\right]
\xrightarrow{\;R_3 + R_2\;}
\left[\begin{array}{ccccc|c} 1 & -4 & -2 & 0 & 5 & 6 \\ 0 & 15 & 0 & 0 & 0 & -23 \\ 0 & 0 & -6 & 2 & 16 & -6 \end{array}\right]
\]
\[
\xrightarrow[\;-\frac{1}{6}R_3\;]{\;\frac{1}{15}R_2\;}
\left[\begin{array}{ccccc|c} 1 & -4 & -2 & 0 & 5 & 6 \\ 0 & 1 & 0 & 0 & 0 & -\frac{23}{15} \\ 0 & 0 & 1 & -\frac{1}{3} & -\frac{8}{3} & 1 \end{array}\right]
\]
We are now in row-echelon form. Notice that in several of the steps above we took advantage of the form of several of the rows to simplify the work somewhat, and in doing this we did several of the steps in a different order than we've used to this point. Remember that there are no set paths to take through these problems!

Because of the fractions that we've got here we're going to have some work to do regardless of whether we stop here and do Gaussian Elimination or go the couple of extra steps and do Gauss-Jordan Elimination. So with that in mind let's go all the way to reduced row-echelon form, so we can say that we've got another example of that in the notes. Here's the remaining work.
\[
\left[\begin{array}{ccccc|c} 1 & -4 & -2 & 0 & 5 & 6 \\ 0 & 1 & 0 & 0 & 0 & -\frac{23}{15} \\ 0 & 0 & 1 & -\frac{1}{3} & -\frac{8}{3} & 1 \end{array}\right]
\xrightarrow{\;R_1 + 2R_3\;}
\left[\begin{array}{ccccc|c} 1 & -4 & 0 & -\frac{2}{3} & -\frac{1}{3} & 8 \\ 0 & 1 & 0 & 0 & 0 & -\frac{23}{15} \\ 0 & 0 & 1 & -\frac{1}{3} & -\frac{8}{3} & 1 \end{array}\right]
\]
\[
\xrightarrow{\;R_1 + 4R_2\;}
\left[\begin{array}{ccccc|c} 1 & 0 & 0 & -\frac{2}{3} & -\frac{1}{3} & \frac{28}{15} \\ 0 & 1 & 0 & 0 & 0 & -\frac{23}{15} \\ 0 & 0 & 1 & -\frac{1}{3} & -\frac{8}{3} & 1 \end{array}\right]
\]
We're now in reduced row-echelon form, so let's go back to equations and see what we've got.
\[
\begin{aligned}
x_1 - \tfrac{2}{3}x_4 - \tfrac{1}{3}x_5 &= \tfrac{28}{15} \quad&\Rightarrow\quad x_1 &= \tfrac{28}{15} + \tfrac{2}{3}x_4 + \tfrac{1}{3}x_5 \\
x_2 &= -\tfrac{23}{15} \\
x_3 - \tfrac{1}{3}x_4 - \tfrac{8}{3}x_5 &= 1 \quad&\Rightarrow\quad x_3 &= 1 + \tfrac{1}{3}x_4 + \tfrac{8}{3}x_5
\end{aligned}
\]
So, we've got two free variables this time, $x_4$ and $x_5$, and notice as well that, unlike any of the other infinite-solution cases, we actually have a fixed value for one of the variables here. That will happen on occasion, so don't worry about it when it does. Here is the solution for this system,
\[
x_1 = \tfrac{28}{15} + \tfrac{2}{3}t + \tfrac{1}{3}s \qquad x_2 = -\tfrac{23}{15} \qquad x_3 = 1 + \tfrac{1}{3}t + \tfrac{8}{3}s \qquad x_4 = t \qquad x_5 = s
\]
where $s$ and $t$ are any numbers.

Now, with all the examples that we've worked to this point, hopefully you've gotten the idea that there really isn't any one set path that you always take through these types of problems. Each system of equations is different and so may need a different solution path. Don't get too locked into any one solution path, as that can often lead to problems.

Homogeneous Systems of Linear Equations
We've got one more topic that we need to discuss briefly in this section. A system of $n$ linear equations in $m$ unknowns of the form
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1m}x_m &= 0 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2m}x_m &= 0 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nm}x_m &= 0
\end{aligned}
\]
is called a homogeneous system. The one characteristic that defines a homogeneous system is the fact that all the equations are set equal to zero, unlike a general system in which each equation can be equal to a different (probably non-zero) number.

Hopefully it is clear that if we take
\[x_1 = 0, \quad x_2 = 0, \quad x_3 = 0, \quad \ldots, \quad x_m = 0\]
we will have a solution to the homogeneous system of equations. In other words, with a homogeneous system we are guaranteed to have at least one solution. This means that Theorem 1 from the previous section can be reduced to the following for homogeneous systems.

Theorem 1 Given a homogeneous system of $n$ equations and $m$ unknowns, there will be one of two possibilities for solutions to the system.
1. There will be exactly one solution, $x_1 = 0,\; x_2 = 0,\; \ldots,\; x_m = 0$. This solution is called the trivial solution.
2. There will be infinitely many non-zero solutions in addition to the trivial solution.

Note that when we say non-zero solution in the above theorem we mean that at least one of the $x_i$'s in the solution will not be zero. It is completely possible that some of them will still be zero, but at least one will not be zero in a non-zero solution.

We can make a further reduction to Theorem 1 from the previous section if we assume that there are more unknowns than equations in a homogeneous system, as the following theorem shows.

Theorem 2 Given a homogeneous system of $n$ linear equations in $m$ unknowns, if $m > n$ (i.e. there are more unknowns than equations) there will be infinitely many solutions to the system.
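Theorem 2 is easy to see in action numerically: any coefficient matrix with more columns than rows has non-trivial null vectors. A minimal sketch; the particular matrix below is just an assumed example, and pulling null vectors out of an SVD is a standard numerical trick rather than anything from these notes.

```python
import numpy as np

# 2 equations, 4 unknowns: Theorem 2 promises non-trivial solutions of Ax = 0.
A = np.array([[1.0, 2.0, -1.0, 3.0],
              [2.0, 0.0,  4.0, 1.0]])

# The rows of Vt beyond rank(A) span the null space of A; any one of
# them (or any scalar multiple of it) is a non-zero solution.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
x = Vt[rank]

print(x)       # one non-trivial solution
print(A @ x)   # approximately [0, 0]
```

Every scalar multiple of `x` is also a solution, which is exactly the "infinitely many" part of the theorem.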
Matrices
In the previous section we used augmented matrices to denote a system of linear equations. In this section we're going to start looking at matrices in more generality. A matrix is nothing more than a rectangular array of numbers, and each of the numbers in the matrix is called an entry. Here are some examples of matrices.
\[
\begin{bmatrix} -1 & 2 & 4 & 3 & 0 & 6 \\ 1 & 0 & 0 & -2 & -4 & 7 \\ 1 & 3 & 6 & -1 & -15 & 0 \end{bmatrix}
\qquad
\begin{bmatrix} 7 & 10 & -1 \\ -8 & 0 & 2 \\ 9 & 3 & 0 \end{bmatrix}
\qquad
\begin{bmatrix} 12 \\ -4 \\ 2 \\ -17 \end{bmatrix}
\qquad
\begin{bmatrix} 3 & -1 & 12 & 0 & -9 \end{bmatrix}
\qquad
\begin{bmatrix} -2 \end{bmatrix}
\]
The size of a matrix with $n$ rows and $m$ columns is denoted by $n \times m$. In denoting the size of a matrix we always list the number of rows first and the number of columns second.

Example 1 Give the size of each of the matrices above.
Solution
\[\begin{bmatrix} -1 & 2 & 4 & 3 & 0 & 6 \\ 1 & 0 & 0 & -2 & -4 & 7 \\ 1 & 3 & 6 & -1 & -15 & 0 \end{bmatrix} \quad\Rightarrow\quad \text{size} : 3 \times 6\]
\[\begin{bmatrix} 7 & 10 & -1 \\ -8 & 0 & 2 \\ 9 & 3 & 0 \end{bmatrix} \quad\Rightarrow\quad \text{size} : 3 \times 3\]
In this matrix the number of rows is equal to the number of columns. Matrices that have the same number of rows as columns are called square matrices.
\[\begin{bmatrix} 12 \\ -4 \\ 2 \\ -17 \end{bmatrix} \quad\Rightarrow\quad \text{size} : 4 \times 1\]
This matrix has a single column and is often called a column matrix.
\[\begin{bmatrix} 3 & -1 & 12 & 0 & -9 \end{bmatrix} \quad\Rightarrow\quad \text{size} : 1 \times 5\]
This matrix has a single row and is often called a row matrix.
\[\begin{bmatrix} -2 \end{bmatrix} \quad\Rightarrow\quad \text{size} : 1 \times 1\]
Often when dealing with $1 \times 1$ matrices we will drop the surrounding brackets and just write $-2$.

Note that sometimes column matrices and row matrices are called column vectors and row vectors respectively. We do need to be careful with the word vector, however, as in later chapters the word vector will be used to denote something much more general than a column or row matrix. Because of this we will, for the most part, be using the terms column matrix and row matrix when needed, instead of column vector and row vector.

There are a lot of notational issues that we're going to have to get used to in this class. First, upper case letters are generally used to refer to matrices, while lower case letters are generally used to refer to numbers. These are general rules, but as you'll see shortly there are exceptions to them, although it will usually be easy to identify those exceptions when they happen.

We will often need to refer to specific entries in a matrix, and so we'll need a notation to take care of that. The entry in the $i$th row and $j$th column of the matrix $A$ is denoted by
\[a_{ij} \qquad \text{OR} \qquad \left(A\right)_{ij}\]
In the first notation the lower case letter we use to denote the entries of a matrix will always match the upper case letter we use to denote the matrix. So the entries of the matrix $B$ will be denoted by $b_{ij}$. In both of these notations the first (left most) subscript will always give the row the entry is in and the second (right most) subscript will always give the column the entry is in. So, $c_{49}$ will be the entry in the 4th row and 9th column of $C$ (which is assumed to be a matrix since it's an upper case letter...).

Using the lower case notation we can denote a general $n \times m$ matrix, $A$, as follows,
\[
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}
\qquad \text{OR} \qquad
A_{n \times m} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}
\]
We don't generally subscript the size of the matrix as we did in the second case, but on occasion it may be useful to make the size clear, and in those cases we tend to subscript it as shown in the second case.

The notation above for a general matrix is fairly cumbersome, so we've also got some much more compact notation that we'll use when we can. When possible we'll use the following to denote a general matrix,
\[\left[a_{ij}\right] \qquad\quad \left[a_{ij}\right]_{n \times m} \qquad\quad A_{n \times m}\]
The first two we tend to use when we need to talk about the general entry of a matrix (such as in certain formulas) but don't really care what that entry is.
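If you experiment with these ideas in software, the size and entry conventions carry over almost directly, with the one wrinkle that most languages index from 0 rather than 1. A small sketch, assuming numpy, using the square matrix from the examples above.

```python
import numpy as np

M = np.array([[7, 10, -1],
              [-8, 0, 2],
              [9, 3, 0]])

print(M.shape)    # (3, 3): rows first, columns second, just like "n x m"
print(M[0, 1])    # 10 -- this is the entry m_12; indices start at 0 in code
```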
Also, we'll denote the size if it's important or needed for whatever we're doing, but otherwise we won't bother with the size. The third notation is really nothing more than the standard notation with the size denoted. We'll use this only when we need to talk about a matrix where the size is important but the entries aren't. We won't run into this one too often, but we will on occasion.

We will be dealing extensively with column and row matrices in later chapters/sections, so we need to take care of some notation for those. These are the main exception to the upper case/lower case convention we adopted earlier for matrices and their entries. Column and row matrices tend to be denoted with a lower case letter that has either been bolded or has an arrow written over it, as follows,
\[
\mathbf{a} = \vec{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
\qquad\qquad
\mathbf{b} = \vec{b} = \begin{bmatrix} b_1 & b_2 & \cdots & b_m \end{bmatrix}
\]
In written documents, such as this, column and row matrices tend to be in bold face, while on the chalkboard of a classroom they tend to get arrows written over them, since it's often difficult on a chalkboard to differentiate a letter that's in bold from one that isn't. Also, notice that with column and row matrices the entries are still denoted with lower case letters that match the letter that represents the matrix, and in this case, since there is either a single column or a single row, there is no reason to double subscript the entries.

Next we need to get a quick definition out of the way for square matrices. Recall that a square matrix is a matrix whose size is $n \times n$ (i.e. it has the same number of rows as columns). In a square matrix the entries $a_{11}, a_{22}, \ldots, a_{nn}$, the entries running from the upper left corner down to the lower right corner, are called the main diagonal.

The next topic that we need to discuss in this section is that of partitioned matrices and submatrices. Any matrix can be partitioned into smaller submatrices simply by adding in horizontal and/or vertical lines between selected rows and/or columns.

Example 2 Here are several partitions of a general $5 \times 3$ matrix.
(a)
\[
A = \left[\begin{array}{c|cc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \\ a_{51} & a_{52} & a_{53} \end{array}\right]
= \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\]
In this case we partitioned the matrix into four submatrices. Also notice that we simplified the matrix into a more compact form, and in this compact form we've mixed and matched some of our notation. The partitioned matrix can be thought of as a smaller matrix with four entries, except this time each of the entries is a matrix instead of a number, and so we used capital letters to represent the entries and subscripted each one with its location in the partitioned matrix.

Be careful not to confuse the location subscripts on each of the submatrices with the size of each submatrix. In this case $A_{11}$ is a $2 \times 1$ submatrix of $A$, $A_{12}$ is a $2 \times 2$ submatrix of $A$, $A_{21}$ is a $3 \times 1$ submatrix of $A$, and $A_{22}$ is a $3 \times 2$ submatrix of $A$.

(b)
\[
A = \left[\begin{array}{c|c|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \\ a_{51} & a_{52} & a_{53} \end{array}\right]
= \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \mathbf{c}_3 \end{bmatrix}
\]
In this case we partitioned $A$ into three column matrices, each representing one column in the original matrix. Again, note that we used the standard column matrix notation (the bold face letters) and subscripted each one with its location in the partitioned matrix. The $\mathbf{c}_i$ in the partitioned matrix are sometimes called the column matrices of $A$.
(c)
\[
A = \left[\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ \hline a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \\ \hline a_{41} & a_{42} & a_{43} \\ \hline a_{51} & a_{52} & a_{53} \end{array}\right]
= \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \mathbf{r}_3 \\ \mathbf{r}_4 \\ \mathbf{r}_5 \end{bmatrix}
\]
Just as we can partition a matrix into each of its columns, as we did in the previous part, we can also partition a matrix into each of its rows. The $\mathbf{r}_i$ in the partitioned matrix are sometimes called the row matrices of $A$.

The previous example showed three of the many possible ways to partition up the matrix. There are, of course, many other ways to partition this matrix. We won't be partitioning up too many matrices here, but we will be doing it on occasion, so it's a useful idea to remember. Also note that when we do partition a matrix into its column/row matrices we will generally put in the bars separating the columns/rows, as we've done here, to indicate that we've got a partitioned matrix.
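In array-based software, partitioning is nothing more than slicing, which makes these ideas cheap to play with. A hedged sketch of part (a)'s four blocks, using an assumed $5 \times 3$ array of consecutive integers purely for illustration.

```python
import numpy as np

A = np.arange(15).reshape(5, 3)   # any 5x3 matrix will do here

# The four submatrices from part (a): split after row 2 and column 1.
A11, A12 = A[:2, :1], A[:2, 1:]   # 2x1 and 2x2 blocks
A21, A22 = A[2:, :1], A[2:, 1:]   # 3x1 and 3x2 blocks

# Stitching the blocks back together recovers the original matrix.
print(np.block([[A11, A12], [A21, A22]]))
```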
To close out this section we're going to introduce a couple of special matrices that we'll see show up on occasion.

The first matrix is the zero matrix. The zero matrix is pretty much what the name implies: an $n \times m$ matrix whose entries are all zeroes. The notation we'll use for the zero matrix is $0_{n \times m}$ for a general zero matrix, or $\mathbf{0}$ for a zero column or row matrix. Here are a couple of zero matrices, just so we can say we have some in the notes.
\[
0_{2 \times 4} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\qquad
\mathbf{0} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\qquad
\mathbf{0} = \begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}
\]
If the size of a column or row zero matrix is important we will sometimes subscript the size on those as well, just to make it clear what the size is. Also, if the size of a full zero matrix is not important or is implied by the problem, we will drop the size from $0_{n \times m}$ and just denote it by $0$.

The second special matrix we'll look at in this section is the identity matrix. The identity matrix is a square $n \times n$ matrix usually denoted by $I_n$, or just $I$ if the size is unimportant or clear from the context of the problem. The entries on the main diagonal of the identity matrix are all ones, and all the other entries in the identity matrix are zeroes. Here are a couple of identity matrices.
\[
I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\qquad\qquad
I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]
As we'll see, identity matrices will arise fairly regularly. Here is a nice theorem about the reduced row-echelon form of a square matrix and how it relates to the identity matrix.

Theorem 1 If $A$ is an $n \times n$ matrix then the reduced row-echelon form of the matrix will either contain at least one row of all zeroes or it will be $I_n$, the $n \times n$ identity matrix.

Proof : This is a simple enough theorem to prove that we may as well. Let's suppose that $B$ is the reduced row-echelon form of the matrix. If $B$ has at least one row of all zeroes we are done, so let's suppose that $B$ does not have a row of all zeroes. This means that every row has a leading 1 in it. Now, we know that the leading 1 of a row must be to the right of the leading 1 of the row immediately above it. Because we are assuming that $B$ is square and doesn't have any rows of all zeroes, we can actually locate each of the leading 1's in $B$.

First, let's suppose that the leading 1 in the first row is NOT $b_{11}$ (i.e. $b_{11} = 0$). The next possible location of the leading 1 in the first row would then be $b_{12}$. So, let's suppose that this is where the leading 1 is. Upon assuming this, we can say that $B$ must have the following form.
\[
B = \begin{bmatrix} 0 & 1 & b_{13} & \cdots & b_{1n} \\ 0 & 0 & b_{23} & \cdots & b_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & b_{n3} & \cdots & b_{nn} \end{bmatrix}
\]
Now, let's assume the best possible scenario happens: the leading 1 of each of the lower rows is exactly one column to the right of the leading 1 above it. This, however, leads us to instant problems. Because our first leading 1 is in the second column, by the time we reach the $(n-1)$st row our leading 1 will be in the $n$th column, and this will in turn force the $n$th row to be a row of all zeroes, which contradicts our initial assumption. If you're not sure you believe this, consider the $4 \times 4$ case.
\[
\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]
Sure enough, a row of all zeroes in the 4th row.

Now, we assumed the best possible scenario for the leading 1's in the lower rows and still ran into problems. If the leading 1 jumps to the right by, say, 2 columns (or 3 or 4, etc.) we will run into the same kind of problem, only we'll end up with more than one row of all zeroes. Likewise, if the leading 1 in the first row is in any of $b_{13}, b_{14}, \ldots, b_{1n}$ we will have the same problem. So, in order to meet the assumption that we don't have any rows of all zeroes, we know that the leading 1 in the first row must be at $b_{11}$.

Using a similar argument to that above, we can see that if the leading 1 on any of the lower rows jumps to the right by more than one column we will have a leading 1 in the $n$th column prior to hitting the $n$th row. This will in turn force at least the $n$th row to be a row of all zeroes, which will again contradict our initial assumption.

Therefore we know that the leading one in the first row is at $b_{11}$, and the only hope of not having a row of all zeroes at the bottom is to have the leading 1 of each row be exactly one column to the right of the leading 1 of the row above it. This means that the leading 1 in the second row must be at $b_{22}$, the leading 1 in the third row must be at $b_{33}$, etc. Eventually we'll hit the $n$th row, and in this row the leading 1 must be at $b_{nn}$. Therefore the leading 1's of $B$ must be on the main diagonal, and because $B$ is the reduced row-echelon form of $A$ we also know that all the entries above and below the leading 1's must be zeroes. This, however, is exactly $I_n$. Therefore, if $B$ does not have a row of all zeroes in it, then we must have that $B = I_n$.

Matrix Arithmetic & Operations
One of the biggest impediments that some people have in learning about matrices for the first time is trying to take everything they know about arithmetic of real numbers and translate it over to matrices. As you will eventually see, much of what you know about arithmetic of real numbers will also be true here, but there are also a few ideas/facts that will no longer hold. To make matters worse, there are some rules of arithmetic of real numbers that will work occasionally with matrices but won't work in general. So, keep this in mind as you go through the next couple of sections and don't be too surprised when something doesn't quite work out as you expect it to.

This section is devoted mostly to developing the arithmetic of matrices, as well as introducing a couple of operations on matrices that don't really have equivalent operations in real numbers. We will see some of the differences between arithmetic of real numbers and matrices mentioned above in this section. We will also see more of them in the next section when we delve into the properties of matrix arithmetic in more detail.
Okay, let's start off matrix arithmetic by defining just what we mean when we say that two matrices are equal.

Definition 1 If $A$ and $B$ are both $n \times m$ matrices then we say that $A = B$ provided corresponding entries from each matrix are equal. Or, in other words, $A = B$ provided $a_{ij} = b_{ij}$ for all $i$ and $j$. Matrices of different sizes cannot be equal.

Example 1 Consider the following matrices.
\[
A = \begin{bmatrix} 9 & 123 \\ 3 & -7 \end{bmatrix} \qquad B = \begin{bmatrix} 9 & b \\ 3 & -7 \end{bmatrix} \qquad C = \begin{bmatrix} 9 \\ 3 \end{bmatrix}
\]
For these matrices we have that $A \ne C$ and $B \ne C$, since they are different sizes and so can't be equal. The fact that $C$ is essentially the first column of both $A$ and $B$ is not important to determining equality in this case. The size of the two matrices is the first thing we should look at in determining equality.

Next, $A = B$ provided we have $b = 123$. If $b \ne 123$ then we will have $A \ne B$.

Next we need to move on to addition and subtraction of two matrices.

Definition 2 If $A$ and $B$ are both $n \times m$ matrices then $A \pm B$ is a new $n \times m$ matrix that is found by adding/subtracting corresponding entries from each matrix. Or, in other words,
\[A \pm B = \left[a_{ij} \pm b_{ij}\right]\]
Matrices of different sizes cannot be added or subtracted.

Example 2 For the following matrices perform the indicated operation, if possible.
\[
A = \begin{bmatrix} 0 & 0 & 7 & -2 \\ 12 & 3 & 7 & 9 \end{bmatrix} \qquad B = \begin{bmatrix} -2 & 4 & 3 & -2 \\ -1 & 8 & 10 & -5 \end{bmatrix} \qquad C = \begin{bmatrix} 2 & 0 \\ -2 & 4 \\ 6 & 0 \end{bmatrix}
\]
(a) $A + B$
(b) $B - A$
(c) $A + C$

Solution
(a) Both $A$ and $B$ are the same size, so we know the addition can be done in this case. Once we know the addition can be done there really isn't all that much to do here other than add the corresponding entries to get the result.
\[A + B = \begin{bmatrix} -2 & 4 & 10 & -4 \\ 11 & 11 & 17 & 4 \end{bmatrix}\]
(b) Again, since $A$ and $B$ are the same size we can do the difference, and, just like the previous part, there really isn't all that much to do. All we need to be careful with is the order. Just like with real number arithmetic, $B - A$ is different from $A - B$. So, in this case we'll subtract the entries of $A$ from the entries of $B$.
\[B - A = \begin{bmatrix} -2 & 4 & -4 & 0 \\ -13 & 5 & 3 & -14 \end{bmatrix}\]
(c) In this case, because $A$ and $C$ are different sizes, the addition can't be done. Likewise, $A - C$, $C - A$, $B + C$, $C - B$, and $B - C$ can't be done, for the same reason.

We now need to move into multiplication involving matrices. However, there are actually two kinds of multiplication to look at: Scalar Multiplication and Matrix Multiplication. Let's start with scalar multiplication.

Definition 3 If $A$ is any matrix and $c$ is any number then the product (or scalar multiple), $cA$, is a new matrix of the same size as $A$ whose entries are found by multiplying the original entries of $A$ by $c$. In other words, $cA = \left[c\,a_{ij}\right]$ for all $i$ and $j$.

Note that in the field of Linear Algebra a number is often called a scalar, and hence the name scalar multiple, since we are multiplying a matrix by a scalar (number). From this point on we will generally call numbers scalars.

Before doing an example we need to get another quick definition out of the way. If $A_1, A_2, \ldots, A_n$ are all matrices of the same size and $c_1, c_2, \ldots, c_n$ are scalars, then the linear combination of $A_1, A_2, \ldots, A_n$ with coefficients $c_1, c_2, \ldots, c_n$ is
\[c_1 A_1 + c_2 A_2 + \cdots + c_n A_n\]
This may seem like a silly thing to define, but we'll be using linear combinations in quite a few places in this class, and so we need to get used to seeing them.
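All of these operations are entrywise, so they translate directly to array arithmetic. A minimal sketch, assuming numpy, that re-checks parts (a) and (b) of Example 2 and throws in a linear combination for good measure.

```python
import numpy as np

A = np.array([[0, 0, 7, -2],
              [12, 3, 7, 9]])
B = np.array([[-2, 4, 3, -2],
              [-1, 8, 10, -5]])

print(A + B)          # entrywise sum, matching part (a)
print(B - A)          # order matters: B - A, not A - B, matching part (b)
print(3 * A - 2 * B)  # a linear combination of A and B
```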
Example 3 Given the matrices
\[
A = \begin{bmatrix} 0 & 9 \\ -2 & 3 \\ -1 & 1 \end{bmatrix} \qquad B = \begin{bmatrix} 8 & 1 \\ -7 & 0 \\ 4 & -1 \end{bmatrix} \qquad C = \begin{bmatrix} 2 & 3 \\ 2 & -5 \\ 10 & -6 \end{bmatrix}
\]
compute $3A + 2B - \frac{1}{2}C$.

Solution
So, we're really being asked to compute a linear combination here. We'll do that by first computing the scalar multiples and then performing the addition and subtraction. Note as well that in the case of the third scalar multiple we are going to consider the scalar to be a positive $\frac{1}{2}$ and leave the minus sign out in front of the matrix. Here is the work for this problem.
\[
3A + 2B - \frac{1}{2}C
= \begin{bmatrix} 0 & 27 \\ -6 & 9 \\ -3 & 3 \end{bmatrix}
+ \begin{bmatrix} 16 & 2 \\ -14 & 0 \\ 8 & -2 \end{bmatrix}
- \begin{bmatrix} 1 & \frac{3}{2} \\ 1 & -\frac{5}{2} \\ 5 & -3 \end{bmatrix}
= \begin{bmatrix} 15 & \frac{55}{2} \\ -21 & \frac{23}{2} \\ 0 & 4 \end{bmatrix}
\]
We now need to move into matrix multiplication. However, before we do the general case let's look at a special case first, since this will help with the general case. Suppose that we have the following two matrices,
\[
\mathbf{a} = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}
\qquad\qquad
\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}
\]
So, $\mathbf{a}$ is a row matrix and $\mathbf{b}$ is a column matrix, and they have the same number of entries. The product of $\mathbf{a}$ and $\mathbf{b}$ is then defined to be
\[\mathbf{a}\mathbf{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n\]
It is important to note that this product can only be done if $\mathbf{a}$ and $\mathbf{b}$ have the same number of entries. If they have a different number of entries then this product is not defined.

Example 4 Compute $\mathbf{a}\mathbf{b}$ given that
\[
\mathbf{a} = \begin{bmatrix} 4 & -10 & 3 \end{bmatrix} \qquad\qquad \mathbf{b} = \begin{bmatrix} -4 \\ 3 \\ 8 \end{bmatrix}
\]
Solution
There is not really a whole lot to do here other than use the definition given above.
\[\mathbf{a}\mathbf{b} = (4)(-4) + (-10)(3) + (3)(8) = -22\]
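This row-times-column product is exactly the familiar dot product, and it's a one-liner to check by machine:

```python
import numpy as np

a = np.array([4, -10, 3])
b = np.array([-4, 3, 8])

print(a @ b)   # -22, i.e. (4)(-4) + (-10)(3) + (3)(8)
```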
Now let's move onto general matrix multiplication.

Definition 4 If $A$ is an $n \times p$ matrix and $B$ is a $p \times m$ matrix then the product (or matrix multiplication) $AB$ is a new matrix of size $n \times m$ whose $ij$th entry is found by multiplying row $i$ of $A$ times column $j$ of $B$.

So, just like with addition and subtraction, we need to be careful with the sizes of the two matrices we're dealing with. However, with multiplication we need to be a little more careful. This definition tells us that the product $AB$ is only defined if $A$ (i.e. the first matrix listed in the product) has the same number of columns as $B$ (i.e. the second matrix listed in the product) has rows. If the number of columns of the first matrix listed is not the same as the number of rows of the second matrix listed, then the product is not defined.

An easy way to check that a product is defined is to write down the two matrices in the order that we want to multiply them and, underneath them, write down their sizes, as shown below.
\[
\underset{n \times p}{A}\;\;\underset{p \times m}{B} \;=\; \underset{n \times m}{AB}
\]
If the two inner numbers are equal then the product is defined, and the size of the product will be given by the outside numbers.

Example 5 Compute $AC$ and $CA$ for the following two matrices, if possible.
\[
A = \begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}
\qquad
C = \begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix}
\]
Solution
Okay, let's first do $AC$. Here are the sizes for $A$ and $C$.
\[\underset{2 \times 4}{A}\;\;\underset{4 \times 3}{C} \;=\; \underset{2 \times 3}{AC}\]
So, the two inner numbers (4 and 4) are the same, so the multiplication can be done, and we can see that the new size of the matrix is $2 \times 3$. Now let's actually do the multiplication. We'll go through the first couple of entries in the product in detail and then do the remaining entries a little quicker.

To get the entry in the first row and first column of $AC$ we'll multiply the first row of $A$ by the first column of $C$ as follows,
\[(1)(8) + (-3)(-3) + (0)(2) + (4)(-1) = 13\]
If we next want the entry in the first row and second column of $AC$ we'll multiply the first row of $A$ by the second column of $C$ as follows,
\[(1)(5) + (-3)(10) + (0)(0) + (4)(-7) = -53\]
Okay, at this point let's stop and insert these into the product so we can make sure that we've got our bearings. Here's the product so far,
\[
\begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}
\begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix}
= \begin{bmatrix} 13 & -53 & \ast \\ \ast & \ast & \ast \end{bmatrix}
\]
As we can see, we've got four entries left to compute. For these we'll give the row and column multiplications but leave it to you to make sure we used the correct row/column and put the result in the correct place. Here's the remaining work.
\[
\begin{aligned}
(1)(3) + (-3)(2) + (0)(-4) + (4)(5) &= 17 \\
(-2)(8) + (5)(-3) + (-8)(2) + (9)(-1) &= -56 \\
(-2)(5) + (5)(10) + (-8)(0) + (9)(-7) &= -23 \\
(-2)(3) + (5)(2) + (-8)(-4) + (9)(5) &= 81
\end{aligned}
\]
Here is the completed product.
\[
\begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}
\begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix}
= \begin{bmatrix} 13 & -53 & 17 \\ -56 & -23 & 81 \end{bmatrix}
\]
Now let's do $CA$. Here are the sizes for this product.
\[\underset{4 \times 3}{C}\;\;\underset{2 \times 4}{A} \qquad \text{N/A}\]
Okay, in this case the two inner numbers (3 and 2) are NOT the same, and so this product can't be done.

So, with this example we've now run across the first real difference between real number arithmetic and matrix arithmetic. When dealing with real numbers the order in which we write a product doesn't affect the actual result. For instance, (2)(3) = 6 and (3)(2) = 6. We can flip the order and we get the same answer. With matrices, however, we will have to be very careful and pay attention to the order in which the product is written down. As this example has shown, the product $AC$ could be computed while the product $CA$ is not defined.

Now, do not take the previous example and assume that all products will work that way. It is possible for both $AC$ and $CA$ to be defined, as we'll see in the next example.

Example 6 Compute $BD$ and $DB$ for the given matrices, if possible.
\[
B = \begin{bmatrix} 3 & -1 & 7 \\ 10 & 1 & -8 \\ -5 & 2 & 4 \end{bmatrix}
\qquad
D = \begin{bmatrix} 1 & 4 & 9 \\ -6 & 2 & -1 \\ -7 & 4 & 7 \end{bmatrix}
\]
Solution
First, notice that both of these matrices are $3 \times 3$ matrices, so both $BD$ and $DB$ are defined. Again, it's worth pointing out that this example differs from the previous example in that both products are defined here rather than only one. Also note that in both cases the product will be a new $3 \times 3$ matrix.

In this example we're going to leave the work of verifying the products to you. It is good practice, so you should try to verify at least one of the following products.
\[
BD = \begin{bmatrix} 3 & -1 & 7 \\ 10 & 1 & -8 \\ -5 & 2 & 4 \end{bmatrix}\begin{bmatrix} 1 & 4 & 9 \\ -6 & 2 & -1 \\ -7 & 4 & 7 \end{bmatrix} = \begin{bmatrix} -40 & 38 & 77 \\ 60 & 10 & 33 \\ -45 & 0 & -19 \end{bmatrix}
\]
\[
DB = \begin{bmatrix} 1 & 4 & 9 \\ -6 & 2 & -1 \\ -7 & 4 & 7 \end{bmatrix}\begin{bmatrix} 3 & -1 & 7 \\ 10 & 1 & -8 \\ -5 & 2 & 4 \end{bmatrix} = \begin{bmatrix} -2 & 21 & 11 \\ 7 & 6 & -62 \\ -16 & 25 & -53 \end{bmatrix}
\]
This example leads us to yet another difference (although it's related to the first) between real number arithmetic and matrix arithmetic. In this example both $BD$ and $DB$ were defined. Notice, however, that the products were definitely not the same.
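Matrix products like these are easy to spot check by machine, which is a good habit to build whenever you do one by hand. A minimal sketch with Example 6's matrices:

```python
import numpy as np

B = np.array([[3, -1, 7],
              [10, 1, -8],
              [-5, 2, 4]])
D = np.array([[1, 4, 9],
              [-6, 2, -1],
              [-7, 4, 7]])

print(B @ D)                          # matches BD above
print(D @ B)                          # matches DB above
print(np.array_equal(B @ D, D @ B))   # False: the products differ, so
                                      # matrix multiplication is not commutative
```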
There is nothing wrong with this, so don't get excited about it when it does happen. Note, however, that this doesn't mean the two products will never be the same. It is possible for them to be the same, and we'll see at least one case where the two products are the same in a couple of sections.

For the sake of completeness, if $A$ is an $n \times p$ matrix and $B$ is a $p \times m$ matrix, then the entry in the $i$th row and $j$th column of $AB$ is given by the following formula,
\[\left(AB\right)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} + \cdots + a_{ip}b_{pj}\]
This formula can be useful on occasion, but is really used mostly in proofs and in computer programs that compute the product of matrices.

On occasion it can be convenient to know a single row or a single column from a product, and not the whole product itself. The following theorem tells us how to get our hands on just that.

Theorem 1 Assuming that $A$ and $B$ are appropriately sized so that $AB$ is defined, then,
1. The $i$th row of $AB$ is given by the matrix product : [$i$th row of $A$]$B$.
2. The $j$th column of $AB$ is given by the matrix product : $A$[$j$th column of $B$].

Example 7 Compute the second row and third column of $AC$ given the following matrices.
\[
A = \begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}
\qquad
C = \begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix}
\]
Solution
These are the matrices from Example 5, so we can verify the results of using this fact once we're done.

Let's find the second row first. According to the fact, this means we need to multiply the second row of $A$ by $C$. Here is that work.
\[
\begin{bmatrix} -2 & 5 & -8 & 9 \end{bmatrix}
\begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix}
= \begin{bmatrix} -56 & -23 & 81 \end{bmatrix}
\]
Sure enough, this is the correct second row of the product $AC$.

Next, let's use the fact to get the third column. This means that we'll need to multiply $A$ by the third column of $C$. Here is that work.
\[
\begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}
\begin{bmatrix} 3 \\ 2 \\ -4 \\ 5 \end{bmatrix}
= \begin{bmatrix} 17 \\ 81 \end{bmatrix}
\]
And sure enough, this also gives us the correct answer.

We can use this fact about how to get individual rows or columns of a product, as well as the idea of a partitioned matrix that we saw in the previous section, to derive a couple of new ways to find the product of two matrices. Let's start by assuming we've got two matrices $A$ (size $n \times p$) and $B$ (size $p \times m$), so we know the product $AB$ is defined.

Now, for the first new way of finding the product, let's partition $A$ into its row matrices as follows,
\[
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix}
= \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_n \end{bmatrix}
\]
Now, from the fact we know that the $i$th row of $AB$ is [$i$th row of $A$]$B$, or $\mathbf{r}_i B$. Using this idea the product $AB$ can then be written as a new partitioned matrix as follows.
\[
AB = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_n \end{bmatrix} B
= \begin{bmatrix} \mathbf{r}_1 B \\ \mathbf{r}_2 B \\ \vdots \\ \mathbf{r}_n B \end{bmatrix}
\]
For the second new way of finding the product we'll partition $B$ into its column matrices as,
\[
B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1m} \\ b_{21} & b_{22} & \cdots & b_{2m} \\ \vdots & \vdots & & \vdots \\ b_{p1} & b_{p2} & \cdots & b_{pm} \end{bmatrix}
= \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_m \end{bmatrix}
\]
We can then use the fact that the $j$th column of $AB$ is given by $A$[$j$th column of $B$], and so the product $AB$ can be written as a new partitioned matrix as follows,
\[
AB = A\begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_m \end{bmatrix} = \begin{bmatrix} A\mathbf{c}_1 & A\mathbf{c}_2 & \cdots & A\mathbf{c}_m \end{bmatrix}
\]
Example 8 Use both of the new methods for computing products to find $AC$ for the following matrices.
\[
A = \begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}
\qquad
C = \begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix}
\]
Solution
So, once again we know the answer, and we can use it to check our results against the answer from Example 5.

First, let's use the row matrices of $A$. Here are the two row matrices of $A$,
\[\mathbf{r}_1 = \begin{bmatrix} 1 & -3 & 0 & 4 \end{bmatrix} \qquad\qquad \mathbf{r}_2 = \begin{bmatrix} -2 & 5 & -8 & 9 \end{bmatrix}\]
and here are the rows of the product.
\[
\mathbf{r}_1 C = \begin{bmatrix} 1 & -3 & 0 & 4 \end{bmatrix}\begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix} = \begin{bmatrix} 13 & -53 & 17 \end{bmatrix}
\]
\[
\mathbf{r}_2 C = \begin{bmatrix} -2 & 5 & -8 & 9 \end{bmatrix}\begin{bmatrix} 8 & 5 & 3 \\ -3 & 10 & 2 \\ 2 & 0 & -4 \\ -1 & -7 & 5 \end{bmatrix} = \begin{bmatrix} -56 & -23 & 81 \end{bmatrix}
\]
Putting these together gives,
\[
AC = \begin{bmatrix} \mathbf{r}_1 C \\ \mathbf{r}_2 C \end{bmatrix} = \begin{bmatrix} 13 & -53 & 17 \\ -56 & -23 & 81 \end{bmatrix}
\]
and this is the correct answer.

Now let's compute the product using columns. Here are the three column matrices of $C$.
\[
\mathbf{c}_1 = \begin{bmatrix} 8 \\ -3 \\ 2 \\ -1 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} 5 \\ 10 \\ 0 \\ -7 \end{bmatrix} \qquad \mathbf{c}_3 = \begin{bmatrix} 3 \\ 2 \\ -4 \\ 5 \end{bmatrix}
\]
Here are the columns of the product.
\[
A\mathbf{c}_1 = \begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}\begin{bmatrix} 8 \\ -3 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 13 \\ -56 \end{bmatrix}
\qquad
A\mathbf{c}_2 = \begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}\begin{bmatrix} 5 \\ 10 \\ 0 \\ -7 \end{bmatrix} = \begin{bmatrix} -53 \\ -23 \end{bmatrix}
\]
\[
A\mathbf{c}_3 = \begin{bmatrix} 1 & -3 & 0 & 4 \\ -2 & 5 & -8 & 9 \end{bmatrix}\begin{bmatrix} 3 \\ 2 \\ -4 \\ 5 \end{bmatrix} = \begin{bmatrix} 17 \\ 81 \end{bmatrix}
\]
Putting all this together gives the correct answer,
\[
AC = \begin{bmatrix} A\mathbf{c}_1 & A\mathbf{c}_2 & A\mathbf{c}_3 \end{bmatrix} = \begin{bmatrix} 13 & -53 & 17 \\ -56 & -23 & 81 \end{bmatrix}
\]
We can also write certain kinds of matrix products as a linear combination of column matrices. Consider $A$ an $n \times p$ matrix and $\mathbf{x}$ a $p \times 1$ column matrix. We can easily compute this product directly as follows,
\[
A_{n \times p}\,\mathbf{x}_{p \times 1} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix}
= \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1p}x_p \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2p}x_p \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{np}x_p \end{bmatrix}
\]
Now, using matrix addition we can write the resulting $n \times 1$ matrix as follows,
\[
\begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1p}x_p \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{np}x_p \end{bmatrix}
= \begin{bmatrix} a_{11}x_1 \\ \vdots \\ a_{n1}x_1 \end{bmatrix} + \begin{bmatrix} a_{12}x_2 \\ \vdots \\ a_{n2}x_2 \end{bmatrix} + \cdots + \begin{bmatrix} a_{1p}x_p \\ \vdots \\ a_{np}x_p \end{bmatrix}
\]
Now, each of the $p$ column matrices on the right above can also be rewritten as a scalar multiple as follows,
\[
\begin{bmatrix} a_{11}x_1 \\ \vdots \\ a_{n1}x_1 \end{bmatrix} + \cdots + \begin{bmatrix} a_{1p}x_p \\ \vdots \\ a_{np}x_p \end{bmatrix}
= x_1\begin{bmatrix} a_{11} \\ \vdots \\ a_{n1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ \vdots \\ a_{n2} \end{bmatrix} + \cdots + x_p\begin{bmatrix} a_{1p} \\ \vdots \\ a_{np} \end{bmatrix}
\]
Finally, the column matrices that are multiplied by the $x_i$'s are nothing more than the column matrices of $A$. So, putting all this together gives us,
\[
A\mathbf{x} = x_1\mathbf{c}_1 + x_2\mathbf{c}_2 + \cdots + x_p\mathbf{c}_p
\]
where $\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_p$ are the column matrices of $A$. Written in this manner we can see that $A\mathbf{x}$ can be written as a linear combination of the column matrices of $A$, with the entries of $\mathbf{x}$, namely $x_1, x_2, \ldots, x_p$, as coefficients.
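This "product as a combination of columns" viewpoint is easy to verify numerically. A hedged sketch with a random matrix and vector; the seed and sizes are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4))
x = rng.integers(-5, 5, size=4)

# Ax computed directly ...
direct = A @ x
# ... and as the linear combination x1*c1 + x2*c2 + x3*c3 + x4*c4
# of the columns of A.
combo = sum(x[j] * A[:, j] for j in range(4))

print(np.array_equal(direct, combo))   # True
```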
Example 9 Compute $A\mathbf{x}$ directly and as a linear combination for the following matrices.
\[
A = \begin{bmatrix} 4 & -1 & 2 & 1 \\ 12 & -1 & 3 & 2 \\ 0 & -5 & 10 & -9 \end{bmatrix}
\qquad
\mathbf{x} = \begin{bmatrix} -2 \\ 1 \\ 6 \\ 8 \end{bmatrix}
\]
Solution
We'll leave it to you to verify that the direct computation of the product gives,
\[
A\mathbf{x} = \begin{bmatrix} 4 & -1 & 2 & 1 \\ 12 & -1 & 3 & 2 \\ 0 & -5 & 10 & -9 \end{bmatrix}\begin{bmatrix} -2 \\ 1 \\ 6 \\ 8 \end{bmatrix} = \begin{bmatrix} 11 \\ 9 \\ -17 \end{bmatrix}
\]
Here is the linear combination method of computing the product.
\[
\begin{aligned}
A\mathbf{x} &= -2\begin{bmatrix} 4 \\ 12 \\ 0 \end{bmatrix} + 1\begin{bmatrix} -1 \\ -1 \\ -5 \end{bmatrix} + 6\begin{bmatrix} 2 \\ 3 \\ 10 \end{bmatrix} + 8\begin{bmatrix} 1 \\ 2 \\ -9 \end{bmatrix} \\
&= \begin{bmatrix} -8 \\ -24 \\ 0 \end{bmatrix} + \begin{bmatrix} -1 \\ -1 \\ -5 \end{bmatrix} + \begin{bmatrix} 12 \\ 18 \\ 60 \end{bmatrix} + \begin{bmatrix} 8 \\ 16 \\ -72 \end{bmatrix} \\
&= \begin{bmatrix} 11 \\ 9 \\ -17 \end{bmatrix}
\end{aligned}
\]
This is the same result that we got by the direct computation.

Matrix multiplication also gives us a very nice and compact way of writing systems of equations. In fact, we even saw most of it as we introduced the above idea. Let's start out with a general system of $n$ equations and $m$ unknowns.
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1m}x_m &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2m}x_m &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nm}x_m &= b_n
\end{aligned}
\]
Now, instead of thinking of these as a set of equations, let's think of each side as a column matrix of size $n \times 1$, as follows,
\[
\begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1m}x_m \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nm}x_m \end{bmatrix}
= \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}
\]
In the work above we saw that the left side of this can be written as the following matrix product,
\[
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}
= \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}
\]
If we now denote the coefficient matrix by $A$, the column matrix containing the unknowns by $\mathbf{x}$, and the column matrix containing the $b_i$'s by $\mathbf{b}$, we can write the system in the following matrix form,
\[A\mathbf{x} = \mathbf{b}\]
In many of the sections to follow we'll write general systems of equations as $A\mathbf{x} = \mathbf{b}$, given its compact nature, in order to save space.

Now that we've gotten the basics of matrix arithmetic out of the way, we need to introduce a couple of matrix operations that don't really have any equivalent operations with real numbers.

Definition 5 If $A$ is an $n \times m$ matrix then the transpose of $A$, denoted by $A^T$, is an $m \times n$ matrix that is obtained by interchanging the rows and columns of $A$. So, the first row of $A^T$ is the first column of $A$, the second row of $A^T$ is the second column of $A$, etc. Likewise, the first column of $A^T$ is the first row of $A$, the second column of $A^T$ is the second row of $A$, etc.

On occasion you'll see the transpose defined as follows,
\[A = \left[a_{ij}\right]_{n \times m} \quad\Rightarrow\quad A^T = \left[a_{ji}\right]_{m \times n} \quad \text{for all } i \text{ and } j\]
Notice the difference in the subscripts. Under this definition, the entry in the $i$th row and $j$th column of $A$ will be in the $j$th row and $i$th column of $A^T$. Notice that these two definitions are really the same definition; they just don't look like they are the same at first glance.

Definition 6 If $A$ is a square matrix of size $n \times n$ then the trace of $A$, denoted by $\operatorname{tr}(A)$, is the sum of the entries on the main diagonal. Or,
\[\operatorname{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}\]
If $A$ is not square then the trace is not defined.
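Both operations are single calls in most array libraries, which makes the worked example that follows easy to double check. A minimal sketch, using the square matrix $B$ from that example:

```python
import numpy as np

B = np.array([[3, 2, -6],
              [-9, 1, -7],
              [5, 0, 12]])

print(B.T)          # the transpose: rows and columns interchanged
print(np.trace(B))  # 16, i.e. 3 + 1 + 12, the sum of the main diagonal
```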
Example 10 Determine the transpose and trace (if it is defined) for each of the following matrices.
\[
A = \begin{bmatrix} 4 & -5 & 10 & -1 \\ 7 & 3 & 0 & -2 \end{bmatrix}
\quad
B = \begin{bmatrix} 3 & 2 & -6 \\ -9 & 1 & -7 \\ 5 & 0 & 12 \end{bmatrix}
\quad
C = \begin{bmatrix} 9 \\ -1 \\ 8 \end{bmatrix}
\quad
D = \begin{bmatrix} 15 \end{bmatrix}
\quad
E = \begin{bmatrix} -12 & -7 \\ -7 & 10 \end{bmatrix}
\]
Solution
There really isn't all that much to do here other than go through the definitions. Note as well that the trace will not be defined for $A$ and $C$, since these matrices are not square.
\[
A^T = \begin{bmatrix} 4 & 7 \\ -5 & 3 \\ 10 & 0 \\ -1 & -2 \end{bmatrix}
\qquad \operatorname{tr}(A) : \text{not defined since } A \text{ is not square}
\]
\[
B^T = \begin{bmatrix} 3 & -9 & 5 \\ 2 & 1 & 0 \\ -6 & -7 & 12 \end{bmatrix}
\qquad \operatorname{tr}(B) = 3 + 1 + 12 = 16
\]
\[
C^T = \begin{bmatrix} 9 & -1 & 8 \end{bmatrix}
\qquad \operatorname{tr}(C) : \text{not defined since } C \text{ is not square}
\]
\[
D^T = \begin{bmatrix} 15 \end{bmatrix}
\qquad \operatorname{tr}(D) = 15
\]
\[
E^T = \begin{bmatrix} -12 & -7 \\ -7 & 10 \end{bmatrix}
\qquad \operatorname{tr}(E) = -12 + 10 = -2
\]
In the previous example note that $D^T = D$ and that $E^T = E$. In these cases the matrix is called symmetric. So, in the previous example $D$ and $E$ are symmetric, while $A$, $B$, and $C$ are not.

Properties of Matrix Arithmetic and the Transpose
In this section we're going to take a quick look at some of the properties of matrix arithmetic and of the transpose of a matrix. As mentioned in the previous section, most of the basic rules of real number arithmetic are still valid in matrix arithmetic. However, there are a few that are no longer valid, as we'll be seeing.

We've already seen one of the real number properties that doesn't hold in matrix arithmetic. If $a$ and $b$ are two real numbers then we know by the commutative law for multiplication of real numbers that $ab = ba$ (i.e. (2)(3) = (3)(2) = 6). However, if $A$ and $B$ are two matrices such that $AB$ is defined, we saw an example in the previous section in which $BA$ was not defined, as well as an example in which $BA$ was defined and yet $AB \ne BA$. In other words, we don't have a commutative law for matrix multiplication. Note that this doesn't mean that we'll never have $AB = BA$ for some matrices $A$ and $B$; it is possible for this to happen (as we'll see in the next section), we just can't guarantee that it will happen if both $AB$ and $BA$ are defined.

Now, let's take a quick look at the properties of real number arithmetic that are valid in matrix arithmetic.

Properties
In the following set of properties $a$ and $b$ are scalars and $A$, $B$, and $C$ are matrices. We'll assume that the sizes of the matrices in each property are such that the operation in that property is defined.
1. $A + B = B + A$ (Commutative law for addition)
2. $A + (B + C) = (A + B) + C$ (Associative law for addition)
3. $A(BC) = (AB)C$ (Associative law for multiplication)
4. $A(B \pm C) = AB \pm AC$ (Left distributive law)
5. $(B \pm C)A = BA \pm CA$ (Right distributive law)
6. $a(B \pm C) = aB \pm aC$
7. $(a \pm b)C = aC \pm bC$
8. $(ab)C = a(bC)$
9. $a(BC) = (aB)C = B(aC)$

With real number arithmetic we didn't need both 4. and 5., since we've also got the commutative law for multiplication there. However, since we don't have a commutative law for matrix multiplication we really do need both 4. and 5. Also, properties 6. through 9. are simply distributive or associative laws for dealing with scalar multiplication.

Now, let's take a look at a couple of other ideas from real number arithmetic and see if they have equivalent ideas in matrix arithmetic. We'll start with the following idea. From real number arithmetic we know that $1 \cdot a = a \cdot 1 = a$. Or, in other words, multiplying a number by 1 (one) doesn't change the number. The identity matrix will give the same result in matrix multiplication.
If $A$ is an $n \times m$ matrix then we have,
\[I_n A = A I_m = A\]
Note that we really do need different identity matrices on each side of $A$, whose sizes depend upon the size of $A$.

Example 1 Consider the following matrix.
\[
A = \begin{bmatrix} 10 & 0 \\ 3 & -8 \\ -1 & 11 \\ 7 & -4 \end{bmatrix}
\]
Then,
\[
I_4 A = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 10 & 0 \\ 3 & -8 \\ -1 & 11 \\ 7 & -4 \end{bmatrix} = \begin{bmatrix} 10 & 0 \\ 3 & -8 \\ -1 & 11 \\ 7 & -4 \end{bmatrix}
\qquad
A I_2 = \begin{bmatrix} 10 & 0 \\ 3 & -8 \\ -1 & 11 \\ 7 & -4 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 10 & 0 \\ 3 & -8 \\ -1 & 11 \\ 7 & -4 \end{bmatrix}
\]
Now, just like the identity matrix takes the place of the number 1 (one) in matrix multiplication, the zero matrix (denoted by $0$ for a general matrix and $\mathbf{0}$ for a column/row matrix) will take the place of the number 0 (zero) in most of matrix arithmetic.

Note that we said most of matrix arithmetic. There are a couple of properties involving 0 in real numbers that are not necessarily valid in matrix arithmetic. Let's first start with the properties that are still valid.

Zero Matrix Properties
In the following properties $A$ is a matrix and $0$ is the zero matrix, sized appropriately for the indicated operation to be valid.
1. $A + 0 = 0 + A = A$
2. $A - A = 0$
3. $0 - A = -A$
4. $0A = 0$ and $A0 = 0$

Now, in real number arithmetic we know that if $ab = ac$ and $a \ne 0$ then we must have $b = c$ (sometimes called the cancellation law). We also know that if $ab = 0$ then we have $a = 0$ and/or $b = 0$ (sometimes called the zero factor property). Neither of these properties of real number arithmetic is valid in general for matrix arithmetic.

Example 2 Consider the following three matrices.
\[
A = \begin{bmatrix} 3 & -2 \\ 6 & -4 \end{bmatrix} \qquad B = \begin{bmatrix} -1 & 2 \\ 3 & -2 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & 4 \\ 6 & 1 \end{bmatrix}
\]
We'll leave it to you to verify that,
\[
AB = AC = \begin{bmatrix} -9 & 10 \\ -18 & 20 \end{bmatrix}
\]
Clearly $A \ne 0$ and just as clearly $B \ne C$, and yet we do have $AB = AC$. So, at least in this case, the cancellation law does not hold.

We should be careful and not read too much into the results of the previous example. The cancellation law will not be valid in general for matrix multiplication. However, there are times when a variation of the cancellation law will be valid, as we'll see in the next section.

Example 3 Consider the following two matrices.
\[
A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 16 & -2 \\ -8 & 1 \end{bmatrix}
\]
We'll leave it to you to verify that,
\[
AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
\]
So, we've got $AB = 0$ despite the fact that $A \ne 0$ and $B \ne 0$. So, the zero factor property does not hold in this case.

Now, again, we need to be careful. There are times when we will have a variation of the zero factor property; however, there will be no zero factor property for the multiplication of any two random matrices.

The next topic that we need to take a look at is that of powers of matrices. At this point we'll just work with positive exponents. We'll need the next section before we can deal with negative exponents. Let's start off with the following definition.

Definition 1 If $A$ is a square matrix then,
\[A^0 = I \qquad\qquad A^n = \underbrace{AA \cdots A}_{n \text{ times}}, \quad n > 0\]

We've also got several of the standard integer exponent properties that we are used to working with.

Properties of Matrix Exponents
If $A$ is a square matrix and $n$ and $m$ are integers then,
\[A^n A^m = A^{n+m} \qquad\qquad \left(A^n\right)^m = A^{nm}\]
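Integer powers are just repeated products, and numpy happens to ship a helper for them. A minimal sketch, using the matrix from the example that follows, that also exercises the first exponent rule:

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[-7, 3],
              [5, 1]])

A2 = matrix_power(A, 2)   # same as A @ A
A3 = matrix_power(A, 3)   # same as A @ A @ A

# The exponent rule A^n A^m = A^(n+m) in action:
print(np.array_equal(A2 @ A3, matrix_power(A, 5)))   # True
print(A2)   # [[64, -18], [-30, 16]], matching part (a) of Example 4 below
```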
We can also talk about plugging matrices into polynomials, using the following definition. If we have the polynomial
\[p(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0\]
and $A$ is a square matrix, then
\[p(A) = a_n A^n + a_{n-1}A^{n-1} + \cdots + a_1 A + a_0 I\]
where the identity matrix on the constant term $a_0$ has the same size as $A$.

Example 4 Evaluate each of the following for the given matrix.
\[A = \begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix}\]
(a) $A^2$
(b) $A^3$
(c) $p(A)$ where $p(x) = -6x^3 + 10x - 9$

Solution
(a) There really isn't much to do with this problem. We'll leave it to you to verify the multiplication.
\[A^2 = \begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix}\begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix} = \begin{bmatrix} 64 & -18 \\ -30 & 16 \end{bmatrix}\]
(b) In this case we may as well take advantage of the fact that we've got the result from the first part already. Again, we'll leave it to you to verify the multiplication.
\[A^3 = A^2 A = \begin{bmatrix} 64 & -18 \\ -30 & 16 \end{bmatrix}\begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix} = \begin{bmatrix} -538 & 174 \\ 290 & -74 \end{bmatrix}\]
(c) In this case we'll need the result from the second part. Outside of that there really isn't much to do here.
\[
\begin{aligned}
p(A) &= -6A^3 + 10A - 9I \\
&= -6\begin{bmatrix} -538 & 174 \\ 290 & -74 \end{bmatrix} + 10\begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix} - 9\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \\
&= \begin{bmatrix} 3228 & -1044 \\ -1740 & 444 \end{bmatrix} + \begin{bmatrix} -70 & 30 \\ 50 & 10 \end{bmatrix} + \begin{bmatrix} -9 & 0 \\ 0 & -9 \end{bmatrix} \\
&= \begin{bmatrix} 3149 & -1014 \\ -1690 & 445 \end{bmatrix}
\end{aligned}
\]
The last topic in this section that we need to take care of is some quick properties of the transpose of a matrix.

Properties of the Transpose
If $A$ and $B$ are matrices whose sizes are such that the given operations are defined, and $c$ is any scalar, then,
1. $\left(A^T\right)^T = A$
2. $\left(A \pm B\right)^T = A^T \pm B^T$
3. $\left(cA\right)^T = cA^T$
4. $\left(AB\right)^T = B^T A^T$

The first three of these properties should be fairly obvious from the definition of the transpose. The fourth is a little trickier to see, but isn't that bad to verify.

Proof of #4 : We know that the entry in the $i$th row and $j$th column of $AB$ is given by,
\[\left(AB\right)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} + \cdots + a_{ip}b_{pj}\]
We also know that the entry in the $i$th row and $j$th column of $\left(AB\right)^T$ is found simply by interchanging the subscripts $i$ and $j$, and so it is,
\[\left(\left(AB\right)^T\right)_{ij} = \left(AB\right)_{ji} = a_{j1}b_{1i} + a_{j2}b_{2i} + a_{j3}b_{3i} + \cdots + a_{jp}b_{pi}\]
Now, let's denote the entries of $A^T$ and $B^T$ as $\bar{a}_{ij}$ and $\bar{b}_{ij}$ respectively. Again, based on the definition of the transpose we also know that,
\[A^T = \left[\bar{a}_{ij}\right] = \left[a_{ji}\right] \qquad\qquad B^T = \left[\bar{b}_{ij}\right] = \left[b_{ji}\right]\]
and so from this we see that $\bar{a}_{ij} = a_{ji}$ and $\bar{b}_{ij} = b_{ji}$. Finally, the entry in the $i$th row and $j$th column of $B^T A^T$ is given by,
\[\left(B^T A^T\right)_{ij} = \bar{b}_{i1}\bar{a}_{1j} + \bar{b}_{i2}\bar{a}_{2j} + \bar{b}_{i3}\bar{a}_{3j} + \cdots + \bar{b}_{ip}\bar{a}_{pj}\]
Now, plugging in for $\bar{a}_{ij}$ and $\bar{b}_{ij}$ we get that,
\[
\begin{aligned}
\left(B^T A^T\right)_{ij} &= \bar{b}_{i1}\bar{a}_{1j} + \bar{b}_{i2}\bar{a}_{2j} + \cdots + \bar{b}_{ip}\bar{a}_{pj} \\
&= b_{1i}a_{j1} + b_{2i}a_{j2} + \cdots + b_{pi}a_{jp} \\
&= a_{j1}b_{1i} + a_{j2}b_{2i} + \cdots + a_{jp}b_{pi} \\
&= \left(\left(AB\right)^T\right)_{ij}
\end{aligned}
\]
So, just what have we done here? We've managed to show that the entry in the $i$th row and $j$th column of $\left(AB\right)^T$ is equal to the entry in the $i$th row and $j$th column of $B^T A^T$. Therefore, since each of the corresponding entries is equal, the matrices must also be equal.

Note that #4 can be naturally extended to more than two matrices. For example,
\[\left(ABC\right)^T = C^T B^T A^T\]
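Property 4 is the one that tends to surprise people, so it's worth seeing numerically; note the reversal of the factors. A minimal sketch with random matrices of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-9, 9, size=(2, 3))
B = rng.integers(-9, 9, size=(3, 4))

print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T
# Note that A.T @ B.T isn't even defined here (3x2 times 4x3), which is
# a good reminder of why the order has to flip.
```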
Inverse Matrices and Elementary Matrices

Our main goal in this section is to define inverse matrices and to take a look at some nice properties involving them. We won't actually be finding any inverse matrices in this section; that is the topic of the next section.

We'll also take a quick look at elementary matrices which, as we'll see, we can use in the next section to help us find inverse matrices. Actually, that's not totally true. We'll use them to help us devise a method for finding inverse matrices, but we won't be explicitly using them to find the inverse.

So, let's start off with the definition of the inverse matrix.

Definition 1 If $A$ is a square matrix and we can find another matrix of the same size, say $B$, such that

$$AB = BA = I$$

then we call $A$ invertible and we say that $B$ is an inverse of the matrix $A$. If we can't find such a matrix $B$ we call $A$ a singular matrix.

Note that we only talk about inverse matrices for square matrices. Also note that if $A$ is invertible it will on occasion be called non-singular. We should also point out that we could also say that $B$ is invertible and that $A$ is the inverse of $B$.

Before proceeding we need to show that the inverse of a matrix is unique, that is, for a given invertible matrix $A$ there is exactly one inverse for the matrix.

Theorem 1 Suppose that $A$ is invertible and that both $B$ and $C$ are inverses of $A$. Then $B = C$, and we will denote the inverse as $A^{-1}$.

Proof : Since $B$ is an inverse of $A$ we know that $AB = I$. Now multiply both sides of this by $C$ to get $C(AB) = CI = C$. However, by the associative law of matrix multiplication we can also write $C(AB)$ as $C(AB) = (CA)B = IB = B$. Therefore, putting these two pieces together we see that $C = C(AB) = B$, or $C = B$.

So, the inverse for a matrix is unique. To denote this fact we will denote the inverse of the matrix $A$ as $A^{-1}$ from this point on.

Example 1 Given the matrix $A$, verify that the indicated matrix is in fact the inverse.

$$A = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix} \qquad\qquad A^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}$$

Solution To verify that we do in fact have the inverse we'll need to check that

$$AA^{-1} = A^{-1}A = I$$

This is easy enough to do and so we'll leave it to you to verify the multiplication.

$$AA^{-1} = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix}\begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad\quad A^{-1}A = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}\begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

As the definition of an inverse matrix suggests, not every matrix will have an inverse. Here is an example of a matrix without an inverse.

Example 2 The matrix below does not have an inverse.

$$B = \begin{bmatrix} 3 & 9 & 2 \\ 0 & 0 & 0 \\ -4 & -5 & 1 \end{bmatrix}$$

This is fairly simple to see. If $B$ has an inverse then it must be a $3 \times 3$ matrix. So, let's just take any old $3 \times 3$ matrix,

$$C = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix}$$

Now let's think about the product $BC$. We know that the 2nd row of $BC$ can be found by looking at the following matrix multiplication,

$$\text{2nd row of } BC = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix} C = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}$$

So, the second row of $BC$ is $\begin{bmatrix} 0 & 0 & 0 \end{bmatrix}$, but if $C$ is to be the inverse of $B$ the product $BC$ must be the identity matrix, and this means that the second row must in fact be $\begin{bmatrix} 0 & 1 & 0 \end{bmatrix}$.

Now, $C$ was a general $3 \times 3$ matrix and we've shown that the second row of $BC$ is all zeroes, and hence the product will never be the identity matrix. So $B$ can't have an inverse and is therefore a singular matrix.

In the previous section we introduced the idea of matrix exponentiation. However, we needed to restrict ourselves to positive exponents. We can now take a look at negative exponents.
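Again as an aside not found in the notes, a short NumPy sketch can confirm both examples; np.linalg.inv raises LinAlgError when handed a singular matrix.

```python
import numpy as np

A = np.array([[4., -2.], [-5., 5.]])
Ainv = np.array([[0.5, 0.2], [0.5, 0.4]])
print(np.allclose(A @ Ainv, np.eye(2)))   # True
print(np.allclose(Ainv @ A, np.eye(2)))   # True

B = np.array([[3., 9., 2.], [0., 0., 0.], [-4., -5., 1.]])
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError:
    print("B is singular")                # the row of zeroes dooms it
```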
Definition 2 If $A$ is a square matrix and $n > 0$ then,

$$A^{-n} = \left(A^{-1}\right)^n = \underbrace{A^{-1} A^{-1} \cdots A^{-1}}_{n \text{ times}}$$

Example 3 Compute $A^{-3}$ for the matrix,

$$A = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix}$$

Solution From Example 1 we know that the inverse of $A$ is,

$$A^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}$$

So, this is easy enough to compute.

$$A^{-3} = \left(A^{-1}\right)^3 = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}\begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}\begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} = \begin{bmatrix} \tfrac{7}{20} & \tfrac{9}{50} \\[2pt] \tfrac{9}{20} & \tfrac{13}{50} \end{bmatrix}\begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} = \begin{bmatrix} \tfrac{53}{200} & \tfrac{71}{500} \\[2pt] \tfrac{71}{200} & \tfrac{97}{500} \end{bmatrix}$$

Next, let's take a quick look at some nice facts about the inverse matrix.

Theorem 2 Suppose that $A$ and $B$ are invertible matrices of the same size. Then,
(a) $AB$ is invertible and $\left(AB\right)^{-1} = B^{-1}A^{-1}$.
(b) $A^{-1}$ is invertible and $\left(A^{-1}\right)^{-1} = A$.
(c) For $n = 0, 1, 2, \ldots$, $A^n$ is invertible and $\left(A^n\right)^{-1} = A^{-n} = \left(A^{-1}\right)^n$.
(d) If $c$ is any non-zero scalar then $cA$ is invertible and $\left(cA\right)^{-1} = \frac{1}{c}A^{-1}$.
(e) $A^T$ is invertible and $\left(A^T\right)^{-1} = \left(A^{-1}\right)^T$.

Proof : Note that in each case, in order to prove that the given matrix is invertible all we need to do is show that the inverse is what we claim it to be. Also, don't get excited about showing that the inverse is what we claim it to be. In these cases all we need to do is show that the product (both the left and the right product) of the given matrix and what we claim is the inverse is the identity matrix. That's it.

Also, do not get excited about the inverse notation. For example, in the first one we state that $\left(AB\right)^{-1} = B^{-1}A^{-1}$. Remember that $\left(AB\right)^{-1}$ is just the notation that we use to denote the inverse of $AB$. This notation will not be used in the proof except in the final step to denote the inverse.

(a) Now, as suggested above, showing this is not really all that difficult. All we need to do is show that $\left(AB\right)\left(B^{-1}A^{-1}\right) = I$ and $\left(B^{-1}A^{-1}\right)\left(AB\right) = I$. Here is that work.

$$\left(AB\right)\left(B^{-1}A^{-1}\right) = A\left(BB^{-1}\right)A^{-1} = AIA^{-1} = AA^{-1} = I$$
$$\left(B^{-1}A^{-1}\right)\left(AB\right) = B^{-1}\left(A^{-1}A\right)B = B^{-1}IB = B^{-1}B = I$$

So, we've shown both, and so we now know that $AB$ is in fact invertible (since we've found the inverse!) and that $\left(AB\right)^{-1} = B^{-1}A^{-1}$.

(b) Now, we know from the fact that $A$ is invertible that

$$AA^{-1} = A^{-1}A = I$$

But this is telling us that if we multiply $A^{-1}$ by $A$ on both sides then we'll get the identity matrix. And this is exactly what we need to show that $A^{-1}$ is invertible and that its inverse is $A$.

(c) The best way to prove this part is by a proof technique called induction. However, there's a chance that a good many of you don't know that technique and it isn't the point of this class. Luckily, for this part anyway, we can at least outline another way to prove it. To officially prove this part we'll need to show that $\left(A^n\right)\left(A^{-1}\right)^n = \left(A^{-1}\right)^n\left(A^n\right) = I$. We'll show one of the products and leave the other to you to verify since the work is pretty much identical.

$$\left(A^n\right)\left(A^{-1}\right)^n = \underbrace{\left(AA \cdots A\right)}_{n \text{ times}}\underbrace{\left(A^{-1}A^{-1} \cdots A^{-1}\right)}_{n \text{ times}} = \underbrace{\left(AA \cdots A\right)}_{n-1 \text{ times}}\left(AA^{-1}\right)\underbrace{\left(A^{-1} \cdots A^{-1}\right)}_{n-1 \text{ times}} = \underbrace{\left(AA \cdots A\right)}_{n-1 \text{ times}}\underbrace{\left(A^{-1} \cdots A^{-1}\right)}_{n-1 \text{ times}} = \cdots = AA^{-1} = I$$

Each time an $AA^{-1}$ in the middle collapses to $I$ and drops out, until only the single product $AA^{-1} = I$ remains. Again, we'll leave the second product to you to verify, but the work is identical. After doing this we can see that $A^n$ is invertible and $\left(A^n\right)^{-1} = A^{-n} = \left(A^{-1}\right)^n$.

(d) To prove this part we'll need to show that $\left(cA\right)\left(\frac{1}{c}A^{-1}\right) = \left(\frac{1}{c}A^{-1}\right)\left(cA\right) = I$. As with the last part we'll do half the work and leave the other half to you to verify.

$$\left(cA\right)\left(\frac{1}{c}A^{-1}\right) = \left(c \cdot \frac{1}{c}\right)\left(AA^{-1}\right) = (1)(I) = I$$

Upon doing the second product we can see that $cA$ is invertible and $\left(cA\right)^{-1} = \frac{1}{c}A^{-1}$.

(e) This part will require us to show that $A^T\left(A^{-1}\right)^T = \left(A^{-1}\right)^T A^T = I$, and in keeping with the tradition of the last couple of parts we'll do the first one and leave the second one to you to verify. This one is a little tricky at first, but once you realize the correct formula to use it's not too bad. Let's start with $A^T\left(A^{-1}\right)^T$ and then remember that $\left(CD\right)^T = D^T C^T$. Using this fact (backwards) on $A^T\left(A^{-1}\right)^T$ gives us,

$$A^T\left(A^{-1}\right)^T = \left(A^{-1}A\right)^T = I^T = I$$

Note that we used the fact that $I^T = I$ here, which we'll leave to you to verify. So, upon showing the second product we'll have that $A^T$ is invertible and $\left(A^T\right)^{-1} = \left(A^{-1}\right)^T$.

Note that the first part of this theorem can be easily extended to more than two matrices as follows,

$$\left(ABC\right)^{-1} = C^{-1}B^{-1}A^{-1}$$
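A quick NumPy spot check of Example 3 and of Theorem 2(a), again not part of the notes and using a made-up second matrix $B$:

```python
import numpy as np

A = np.array([[4., -2.], [-5., 5.]])
print(np.linalg.matrix_power(np.linalg.inv(A), 3))
# [[0.265 0.142] [0.355 0.194]] = [[53/200, 71/500], [71/200, 97/500]]

# Theorem 2(a): (AB)^{-1} = B^{-1} A^{-1} for a hypothetical invertible B
B = np.array([[1., 1.], [0., 2.]])
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))   # True
```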
Now, in the previous section we saw that in general we don't have a cancellation law or a zero factor property. However, if we restrict ourselves just a little we can get variations of both of these.

Theorem 3 Suppose that $A$ is an invertible matrix and that $B$, $C$, and $D$ are matrices of the same size as $A$.
(a) If $AB = AC$ then $B = C$
(b) If $AD = 0$ then $D = 0$

Proof :
(a) Since we know that $A$ is invertible we know that $A^{-1}$ exists, so multiply on the left by $A^{-1}$ to get,

$$A^{-1}AB = A^{-1}AC \quad\Rightarrow\quad IB = IC \quad\Rightarrow\quad B = C$$

(b) Again, we know that $A^{-1}$ exists, so multiply on the left by $A^{-1}$ to get,

$$A^{-1}AD = A^{-1}0 \quad\Rightarrow\quad ID = 0 \quad\Rightarrow\quad D = 0$$

Note that this theorem only required that $A$ be invertible; it is completely possible that the other matrices are singular.

Note as well, with the first one, that we've got to remember that matrix multiplication is not commutative, and so if we have $AB = CA$ then there is no reason to think that $B = C$ even if $A$ is invertible. Because we don't know that $CA = AC$ we've got to leave this as is. Also, when we multiply both sides of the equation by $A^{-1}$ we've got to multiply each side on the left or each side on the right, which is again because we don't have the commutative law with matrix multiplication. So, if we tried the above proof on $AB = CA$ we'd have,

$$A^{-1}AB = A^{-1}CA \;\Rightarrow\; B = A^{-1}CA \qquad \text{OR} \qquad ABA^{-1} = CAA^{-1} \;\Rightarrow\; ABA^{-1} = C$$

In either case we don't get $B = C$.

Okay, it is now time to take a quick look at elementary matrices.

Definition 3 A square matrix is called an elementary matrix if it can be obtained by applying a single elementary row operation to the identity matrix of the same size.

Here are some examples of elementary matrices and the row operations that produced them.

Example 4 The following matrices are all elementary matrices. Also given is the row operation on the appropriately sized identity matrix that produced each one.

$$\begin{bmatrix} 9 & 0 \\ 0 & 1 \end{bmatrix} \quad 9R_1 \text{ on } I_2 \qquad\qquad \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \quad R_1 \leftrightarrow R_4 \text{ on } I_4$$

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -7 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_2 - 7R_3 \text{ on } I_4 \qquad\qquad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad 1 \cdot R_2 \text{ on } I_3$$

Note that the fourth example above shows that any identity matrix is also an elementary matrix, since we can think of arriving at that matrix by taking one times any row (not just the second as we used) of the identity matrix.
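In code, an elementary matrix really is just the identity with one row operation applied. The helper below is hypothetical (not from the notes) and builds the "add a multiple of one row to another" kind; rows are 0-indexed as is usual in Python.

```python
import numpy as np

def row_add(n, i, j, c):
    """Elementary matrix for the operation R_i -> R_i + c*R_j on I_n."""
    E = np.eye(n)
    E[i, j] += c    # the only entry that differs from the identity
    return E

# the third matrix from Example 4: R_2 - 7*R_3 applied to I_4
print(row_add(4, 1, 2, -7.0))
```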
Here's a really nice theorem about elementary matrices that we'll be using extensively to develop a method for finding the inverse of a matrix.

Theorem 4 Suppose $E$ is an elementary matrix that was found by applying an elementary row operation to $I_n$. Then if $A$ is an $n \times m$ matrix, $EA$ is the matrix that will result from applying that same row operation to $A$.

Example 5 For the following matrix perform the row operation $R_1 + 4R_2$ on it, then find the elementary matrix, $E$, for this operation and verify that $EA$ gives the same result.

$$A = \begin{bmatrix} 4 & 5 & 6 & 1 & 1 \\ -1 & 2 & 1 & 10 & -3 \\ 3 & 0 & 4 & -4 & 7 \end{bmatrix}$$

Solution Performing the row operation is easy enough.

$$\begin{bmatrix} 4 & 5 & 6 & 1 & 1 \\ -1 & 2 & 1 & 10 & -3 \\ 3 & 0 & 4 & -4 & 7 \end{bmatrix} \xrightarrow{R_1 + 4R_2} \begin{bmatrix} 0 & 13 & 10 & 41 & -11 \\ -1 & 2 & 1 & 10 & -3 \\ 3 & 0 & 4 & -4 & 7 \end{bmatrix}$$

Now, we can find $E$ simply by applying the same operation to $I_3$, and so we have,

$$E = \begin{bmatrix} 1 & 4 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

We just need to verify that $EA$ is then the same matrix that we got above.

$$EA = \begin{bmatrix} 1 & 4 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 4 & 5 & 6 & 1 & 1 \\ -1 & 2 & 1 & 10 & -3 \\ 3 & 0 & 4 & -4 & 7 \end{bmatrix} = \begin{bmatrix} 0 & 13 & 10 & 41 & -11 \\ -1 & 2 & 1 & 10 & -3 \\ 3 & 0 & 4 & -4 & 7 \end{bmatrix}$$

Sure enough, we get the same matrix the theorem predicted.

Now, let's go back to Example 4 for a second and notice that we can apply a second row operation to take each of those elementary matrices back to the original identity matrix.

Example 6 Give the operation that will take the elementary matrices from Example 4 back to the original identity matrix.

$$\begin{bmatrix} 9 & 0 \\ 0 & 1 \end{bmatrix} \xrightarrow{\frac{1}{9}R_1} I_2 \qquad\qquad \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_4} I_4$$

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -7 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \xrightarrow{R_2 + 7R_3} I_4 \qquad\qquad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{1 \cdot R_2} I_3$$

These kinds of operations are called inverse operations, and each row operation has an inverse operation associated with it. The following table gives the inverse operation for each row operation.

| Row operation | Inverse operation |
| --- | --- |
| Multiply row $i$ by $c \ne 0$ | Multiply row $i$ by $\frac{1}{c}$ |
| Interchange rows $i$ and $j$ | Interchange rows $i$ and $j$ |
| Add $c$ times row $i$ to row $j$ | Add $-c$ times row $i$ to row $j$ |

Now that we've got inverse operations we can give the following theorem.

Theorem 5 Suppose that $E$ is the elementary matrix associated with a particular row operation and that $E_0$ is the elementary matrix associated with the inverse operation. Then $E$ is invertible and

$$E^{-1} = E_0$$

Proof : This is actually a really simple proof. Let's start with $E_0 E$. We know from Theorem 4 that this is the same as if we'd applied the inverse operation to $E$, but we also know that inverse operations take an elementary matrix back to the original identity matrix. Therefore we have,

$$E_0 E = I$$

Likewise, if we look at $E E_0$ this is the same as applying the original row operation to $E_0$. However, if you think about it, this will only undo what the inverse operation did to the identity matrix, and so we also have,

$$E E_0 = I$$

Therefore, we've proved that $E E_0 = E_0 E = I$, and so $E$ is invertible and $E^{-1} = E_0$.

Now, suppose that we've got two matrices of the same size, $A$ and $B$. If we can reach $B$ by applying a finite number of row operations to $A$ then we call the two matrices row equivalent. Note that this also means that we can reach $A$ from $B$ by applying the inverse operations in the reverse order.
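Theorems 4 and 5 are easy to sanity check numerically. The sketch below, not part of the notes, reuses the matrices from Example 5:

```python
import numpy as np

A = np.array([[4., 5., 6., 1., 1.],
              [-1., 2., 1., 10., -3.],
              [3., 0., 4., -4., 7.]])

E = np.eye(3)
E[0, 1] = 4.0              # elementary matrix for R1 + 4*R2

rowop = A.copy()
rowop[0] += 4 * rowop[1]   # apply the row operation directly
print(np.array_equal(E @ A, rowop))        # True: Theorem 4

E0 = np.eye(3)
E0[0, 1] = -4.0            # the inverse operation R1 - 4*R2
print(np.array_equal(E0 @ E, np.eye(3)))   # True: Theorem 5, E0 = E^{-1}
```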
Example 7 Consider

$$A = \begin{bmatrix} 4 & 3 & -2 \\ -1 & 5 & 8 \end{bmatrix}$$

Then

$$B = \begin{bmatrix} 4 & 3 & -2 \\ 14 & -1 & -22 \end{bmatrix}$$

is row equivalent to $A$ because we reached $B$ by first multiplying row 2 of $A$ by $-2$ and then adding 3 times row 1 onto row 2.

For the practice, let's do these operations using elementary matrices. Here are the elementary matrices (and their inverses) for the operations on $A$.

$$-2R_2\,: \quad E_1 = \begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix} \qquad E_1^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & -\tfrac{1}{2} \end{bmatrix}$$

$$R_2 + 3R_1\,: \quad E_2 = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \qquad E_2^{-1} = \begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix}$$

Now, to reach $B$, Theorem 4 tells us that we need to multiply the left side of $A$ by each of these in the same order as we applied the operations.

$$E_2 E_1 A = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} 4 & 3 & -2 \\ -1 & 5 & 8 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}\begin{bmatrix} 4 & 3 & -2 \\ 2 & -10 & -16 \end{bmatrix} = \begin{bmatrix} 4 & 3 & -2 \\ 14 & -1 & -22 \end{bmatrix} = B$$

Sure enough, we get $B$ as we should.

Now, since $A$ and $B$ are row equivalent, we should be able to get to $A$ from $B$ by applying the inverse operations in the reverse order. Let's see if that does in fact work.

$$E_1^{-1} E_2^{-1} B = \begin{bmatrix} 1 & 0 \\ 0 & -\tfrac{1}{2} \end{bmatrix}\begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} 4 & 3 & -2 \\ 14 & -1 & -22 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -\tfrac{1}{2} \end{bmatrix}\begin{bmatrix} 4 & 3 & -2 \\ 2 & -10 & -16 \end{bmatrix} = \begin{bmatrix} 4 & 3 & -2 \\ -1 & 5 & 8 \end{bmatrix} = A$$

So, we sure enough end up with the correct matrix, and again remember that each time we multiplied on the left by an elementary matrix, Theorem 4 tells us that is the same thing as applying the associated row operation to the matrix.
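Here is Example 7 as a NumPy sketch (not from the notes), which makes the "inverse operations in reverse order" claim concrete:

```python
import numpy as np

A = np.array([[4., 3., -2.], [-1., 5., 8.]])
E1 = np.array([[1., 0.], [0., -2.]])   # multiply row 2 by -2
E2 = np.array([[1., 0.], [3., 1.]])    # add 3 times row 1 to row 2

B = E2 @ E1 @ A
print(B)   # [[  4.   3.  -2.] [ 14.  -1. -22.]]

# inverse operations applied in reverse order take us back to A
print(np.linalg.inv(E1) @ np.linalg.inv(E2) @ B)   # [[ 4.  3. -2.] [-1.  5.  8.]]
```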
Finding Inverse Matrices

In the previous section we introduced the idea of inverse matrices and elementary matrices. In this section we need to devise a method for actually finding the inverse of a matrix, and as we'll see this method will, in some way, involve elementary matrices, or at least the row operations that they represent.

The first thing that we'll need to do is take care of a couple of theorems.

Theorem 1 If $A$ is an $n \times n$ matrix then the following statements are equivalent.
(a) $A$ is invertible.
(b) The only solution to the system $A\mathbf{x} = \mathbf{0}$ is the trivial solution.
(c) $A$ is row equivalent to $I_n$.
(d) $A$ is expressible as a product of elementary matrices.

Before we get into the proof let's say a couple of words about just what this theorem tells us and how we go about proving something like this. First, when we have a set of statements and we say that they are equivalent, what we're really saying is that either they are all true or they are all false. In other words, if you know one of these statements is true about a matrix $A$ then they are all true for that matrix. Likewise, if one of these statements is false for a matrix $A$ then they are all false for that matrix.

To prove a set of equivalent statements we need to prove a string of implications. This string has to be able to get from any one statement to any other through a finite number of steps. In this case we'll prove the following chain: $(a) \Rightarrow (b) \Rightarrow (c) \Rightarrow (d) \Rightarrow (a)$. By doing this, if we know one of them to be true or false then we can follow this chain to get to any of the others.

The actual proof will involve four parts, one for each implication. To prove a given implication we'll assume the statement on the left is true and show that this must in some way also force the statement on the right to be true. So, let's get going.

Proof :
$(a) \Rightarrow (b)$ : So, we'll assume that $A$ is invertible and we need to show that this assumption also implies that $A\mathbf{x} = \mathbf{0}$ will have only the trivial solution. That's actually pretty easy to do. Since $A$ is invertible we know that $A^{-1}$ exists. So, start by assuming that $\mathbf{x}_0$ is any solution to the system, plug this into the system and then multiply (on the left) both sides by $A^{-1}$ to get,

$$A^{-1}A\mathbf{x}_0 = A^{-1}\mathbf{0} \quad\Rightarrow\quad I\mathbf{x}_0 = \mathbf{0} \quad\Rightarrow\quad \mathbf{x}_0 = \mathbf{0}$$

So, $A\mathbf{x} = \mathbf{0}$ has only the trivial solution and we've managed to prove this implication.

$(b) \Rightarrow (c)$ : Here we're assuming that $A\mathbf{x} = \mathbf{0}$ has only the trivial solution and we'll need to show that $A$ is row equivalent to $I_n$. Recall that two matrices are row equivalent if we can get from one to the other by applying a finite set of elementary row operations. Let's start off by writing down the augmented matrix for this system.

$$\left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 \end{array}\right]$$

Now, if we were going to solve this we would use elementary row operations to reduce it to reduced row-echelon form. We also know, by assumption, that the solution to this system must be

$$x_1 = 0, \quad x_2 = 0, \quad \ldots, \quad x_n = 0$$

Therefore we also know what the reduced row-echelon form of the augmented matrix must be, since that form must give the above solution. The reduced row-echelon form of this augmented matrix must be,

$$\left[\begin{array}{cccc|c} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{array}\right]$$

Now, the entries in the last column do not affect the entries in the first $n$ columns, and so if we take the same set of elementary row operations and apply them to $A$ we will get $I_n$. So $A$ is row equivalent to $I_n$, since we can get to $I_n$ by applying a finite set of row operations to $A$. Therefore this implication has been proven.

$(c) \Rightarrow (d)$ : In this case we're going to assume that $A$ is row equivalent to $I_n$ and we'll need to show that $A$ can be written as a product of elementary matrices. So, since $A$ is row equivalent to $I_n$ we know there is a finite set of elementary row operations that we can apply to $A$ that will give us $I_n$. Let's suppose that these row operations are represented by the elementary matrices $E_1, E_2, \ldots, E_k$. Then by Theorem 4 of the previous section we know that applying each row operation to $A$ is the same thing as multiplying the left side of $A$ by each of the corresponding elementary matrices in the same order. So, we then know that we will have the following.

$$E_k \cdots E_2 E_1 A = I_n$$

Now, by Theorem 5 from the previous section we know that each of these elementary matrices is invertible and that their inverses are also elementary matrices. So multiply the above equation (on the left) by $E_1^{-1}, E_2^{-1}, \ldots, E_k^{-1}$ (in that order) to get,

$$A = E_1^{-1} E_2^{-1} \cdots E_k^{-1} I_n = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$$

So, we see that $A$ is a product of elementary matrices and this implication is proven.

$(d) \Rightarrow (a)$ : Here we'll be assuming that $A$ is a product of elementary matrices and we need to show that $A$ is invertible. This is probably the easiest implication to prove. By Theorem 5 from the previous section we know each of these elementary matrices is invertible, and by Theorem 2(a), also from the previous section, we know that a product of invertible matrices is also invertible. Therefore, $A$ is invertible since it can be written as a product of invertible matrices, and we've proven this implication.

This theorem can actually be extended to include a couple more equivalent statements, but to do that we need another theorem.
Theorem 2 Suppose that $A$ is a square matrix. Then,
(a) If $B$ is a square matrix such that $BA = I$ then $A$ is invertible and $A^{-1} = B$.
(b) If $B$ is a square matrix such that $AB = I$ then $A$ is invertible and $A^{-1} = B$.

Proof :
(a) This proof will need part (b) of Theorem 1. If we can show that $A\mathbf{x} = \mathbf{0}$ has only the trivial solution then by Theorem 1 we will know that $A$ is invertible. So, let $\mathbf{x}_0$ be any solution to $A\mathbf{x} = \mathbf{0}$. Plug this into the equation and then multiply both sides (on the left) by $B$.

$$BA\mathbf{x}_0 = B\mathbf{0} \quad\Rightarrow\quad I\mathbf{x}_0 = \mathbf{0} \quad\Rightarrow\quad \mathbf{x}_0 = \mathbf{0}$$

So, this shows that any solution to $A\mathbf{x} = \mathbf{0}$ must be the trivial solution, and so by Theorem 1 (if one statement is true they all are) $A$ is invertible. We know from the previous section that inverses are unique, and because $BA = I$ we must then also have $A^{-1} = B$.

(b) In this case let $\mathbf{x}_0$ be any solution to $B\mathbf{x} = \mathbf{0}$. Then multiplying both sides (on the left) of this by $A$ we can use a similar argument to that used in (a) to show that $\mathbf{x}_0$ must be the trivial solution, and so $B$ is an invertible matrix and in fact $B^{-1} = A$. Now, this isn't quite what we were asked to prove, but it does in fact give us the proof. Because $B$ is invertible and its inverse is $A$ (by the above work) we know that,

$$AB = BA = I$$

but this is exactly what it means for $A$ to be invertible and for $A^{-1} = B$. So, we are done.

So, what's the big deal with this theorem? Well, recall from the last section that in order to show that a matrix, $B$, was the inverse of $A$ we needed to show that $AB = BA = I$. In other words, we needed to show that both of these products were the identity matrix. Theorem 2 tells us that all we really need to do is show one of them and we get the other one for free.

This theorem gives us the ability to add two equivalent statements to Theorem 1. Here is the improved Theorem 1.

Theorem 3 If $A$ is an $n \times n$ matrix then the following statements are equivalent.
(a) $A$ is invertible.
(b) The only solution to the system $A\mathbf{x} = \mathbf{0}$ is the trivial solution.
(c) $A$ is row equivalent to $I_n$.
(d) $A$ is expressible as a product of elementary matrices.
(e) $A\mathbf{x} = \mathbf{b}$ has exactly one solution for every $n \times 1$ matrix $\mathbf{b}$.
(f) $A\mathbf{x} = \mathbf{b}$ is consistent for every $n \times 1$ matrix $\mathbf{b}$.

Note that (e) and (f) appear to be the same on the surface, but recall that consistent only says that there is at least one solution. If a system is consistent there may be infinitely many solutions. What this is telling us is that if the system is consistent for every choice of $\mathbf{b}$ that we choose to put into the system, then we will in fact only get a single solution for each. If even one $\mathbf{b}$ gives infinitely many solutions then (e) is false, which in turn makes all the other statements false.

Okay, so how do we go about proving this? We've already proved that the first four statements are equivalent above, so there's no reason to redo that work. This means that all we need to do is prove that one of the original statements implies the two new statements, and that these in turn imply one of the four original statements. We'll do this by proving the following implications: $(a) \Rightarrow (e) \Rightarrow (f) \Rightarrow (a)$.

Proof :
$(a) \Rightarrow (e)$ : Okay, with this implication we'll assume that $A$ is invertible and we'll need to show that $A\mathbf{x} = \mathbf{b}$ has exactly one solution for every $n \times 1$ matrix $\mathbf{b}$. This is actually very simple to do. Since $A$ is invertible we know that $A^{-1}$ exists, so we'll do the following.
$$A^{-1}A\mathbf{x} = A^{-1}\mathbf{b} \quad\Rightarrow\quad I\mathbf{x} = A^{-1}\mathbf{b} \quad\Rightarrow\quad \mathbf{x} = A^{-1}\mathbf{b}$$

So, if $A$ is invertible we've shown that the solution to the system will be $\mathbf{x} = A^{-1}\mathbf{b}$, and since matrix multiplication is unique (i.e. we aren't going to get two different answers from the multiplication) the solution must also be unique, and so there is exactly one solution to the system.

$(e) \Rightarrow (f)$ : This implication is trivial. We'll start off by assuming that the system $A\mathbf{x} = \mathbf{b}$ has exactly one solution for every $n \times 1$ matrix $\mathbf{b}$, but that also means that the system is consistent for every $n \times 1$ matrix $\mathbf{b}$, and so we're done with the proof of this implication.

$(f) \Rightarrow (a)$ : Here we'll start off by assuming that $A\mathbf{x} = \mathbf{b}$ is consistent for every $n \times 1$ matrix $\mathbf{b}$, and we'll need to show that this implies $A$ is invertible. So, if $A\mathbf{x} = \mathbf{b}$ is consistent for every $n \times 1$ matrix $\mathbf{b}$, it is consistent for the following $n$ systems.

$$A\mathbf{x} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad A\mathbf{x} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} \qquad \cdots \qquad A\mathbf{x} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$

Since we know each of these systems has a solution, let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be those solutions and form a new matrix, $B$, with these solutions as its columns. In other words,

$$B = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \end{bmatrix}$$

Now let's take a look at the product $AB$. We know from the matrix arithmetic section that the $i$th column of $AB$ will be given by $A\mathbf{x}_i$, and we know what each of these products will be since $\mathbf{x}_i$ is a solution to one of the systems above. So, let's use all this knowledge to see what the product $AB$ is.

$$AB = \begin{bmatrix} A\mathbf{x}_1 & A\mathbf{x}_2 & \cdots & A\mathbf{x}_n \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = I$$

So, we've shown that $AB = I$, but by Theorem 2 this means that $A$ must be invertible, and so we're done with the proof.

Before proceeding let's notice that part (c) of this theorem is also telling us that if we reduce $A$ down to reduced row-echelon form then we'd have $I_n$. This can also be seen in the proof in Theorem 1 of the implication $(b) \Rightarrow (c)$.

So, just how does this theorem help us to determine the inverse of a matrix? Well, first let's assume that $A$ is in fact invertible, so that all the statements in Theorem 3 are true. Now, go back to the proof of the implication $(c) \Rightarrow (d)$. In that proof we saw that there were elementary matrices, $E_1, E_2, \ldots, E_k$, so that we'd get the following,

$$E_k \cdots E_2 E_1 A = I_n$$

Since we know $A$ is invertible we know that $A^{-1}$ exists, so multiply (on the right) each side of this by $A^{-1}$ to get,

$$E_k \cdots E_2 E_1 A A^{-1} = I_n A^{-1} \qquad\Rightarrow\qquad A^{-1} = E_k \cdots E_2 E_1 I_n$$

What this tells us is that we need to find a series of row operations that will reduce $A$ to $I_n$ and then apply the same set of operations to $I_n$; the result will be the inverse, $A^{-1}$.

Okay, all this is fine. We can write down a bunch of symbols to tell us how to find the inverse, but that doesn't always help to actually find it. The work above tells us that we need to identify a series of elementary row operations that will reduce $A$ to $I_n$ and then apply those operations to $I_n$. Well, it turns out that we can do both of these steps simultaneously and we don't need to mess around with the elementary matrices at all.

Let's start off by supposing that $A$ is an invertible $n \times n$ matrix and then form the following new matrix.

$$\left[\begin{array}{c|c} A & I_n \end{array}\right]$$

Note that all we did here was tack $I_n$ onto the original matrix $A$. Now, if we apply a row operation to this it will be equivalent to applying it simultaneously to both $A$ and to $I_n$. So, all we need to do is find a series of row operations that will reduce the "$A$" portion of this to $I_n$, making sure to apply the operations to the whole matrix. Once we've done this we will have,

$$\left[\begin{array}{c|c} I_n & A^{-1} \end{array}\right]$$

provided $A$ is in fact invertible, of course. We'll deal with singular matrices in a bit.
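The $[A \mid I_n] \to [I_n \mid A^{-1}]$ procedure is short enough to sketch in code. The version below is a minimal NumPy sketch, not from the notes, and it assumes no zero pivot is ever encountered; a production routine would interchange rows when necessary.

```python
import numpy as np

def inverse_via_row_reduction(A):
    """Row reduce [A | I] to [I | A^{-1}]; assumes every pivot is non-zero."""
    A = A.astype(float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])        # form [A | I_n]
    for i in range(n):
        M[i] /= M[i, i]                  # get a 1 in the diagonal entry
        for r in range(n):
            if r != i:
                M[r] -= M[r, i] * M[i]   # zero out the rest of column i
    return M[:, n:]                      # the right half is now A^{-1}

print(inverse_via_row_reduction(np.array([[4., -2.], [-5., 5.]])))
# [[0.5 0.2]
#  [0.5 0.4]]
```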
Let's take a look at a couple of examples.

Example 1 Determine the inverse of the following matrix given that it is invertible.

$$A = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix}$$

Solution Note that this is the $2 \times 2$ matrix we looked at in Example 1 of the previous section. In that example we stated (and proved) that the inverse was,

$$A^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}$$

We can now show how we arrived at this inverse. We'll first form the new matrix

$$\left[\begin{array}{cc|cc} 4 & -2 & 1 & 0 \\ -5 & 5 & 0 & 1 \end{array}\right]$$

Next we'll find row operations that will convert the first two columns into $I_2$; the third and fourth columns will then contain $A^{-1}$. Here is that work.

$$\left[\begin{array}{cc|cc} 4 & -2 & 1 & 0 \\ -5 & 5 & 0 & 1 \end{array}\right] \xrightarrow{R_1 + R_2} \left[\begin{array}{cc|cc} -1 & 3 & 1 & 1 \\ -5 & 5 & 0 & 1 \end{array}\right] \xrightarrow{R_2 - 5R_1} \left[\begin{array}{cc|cc} -1 & 3 & 1 & 1 \\ 0 & -10 & -5 & -4 \end{array}\right]$$

$$\xrightarrow{-R_1,\;-\frac{1}{10}R_2} \left[\begin{array}{cc|cc} 1 & -3 & -1 & -1 \\ 0 & 1 & \tfrac{1}{2} & \tfrac{2}{5} \end{array}\right] \xrightarrow{R_1 + 3R_2} \left[\begin{array}{cc|cc} 1 & 0 & \tfrac{1}{2} & \tfrac{1}{5} \\ 0 & 1 & \tfrac{1}{2} & \tfrac{2}{5} \end{array}\right]$$

So, the first two columns are in fact $I_2$, and in the third and fourth columns we've got the inverse,

$$A^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}$$

Example 2 Determine the inverse of the following matrix given that it is invertible.

$$C = \begin{bmatrix} 3 & 1 & 0 \\ -1 & 2 & 2 \\ 5 & 0 & -1 \end{bmatrix}$$

Solution Okay, we'll first form the new matrix,

$$\left[\begin{array}{ccc|ccc} 3 & 1 & 0 & 1 & 0 & 0 \\ -1 & 2 & 2 & 0 & 1 & 0 \\ 5 & 0 & -1 & 0 & 0 & 1 \end{array}\right]$$

and we'll use elementary row operations to reduce the first three columns to $I_3$; the last three columns will then be the inverse of $C$. Here is that work.

$$\xrightarrow{R_1 + 2R_2} \left[\begin{array}{ccc|ccc} 1 & 5 & 4 & 1 & 2 & 0 \\ -1 & 2 & 2 & 0 & 1 & 0 \\ 5 & 0 & -1 & 0 & 0 & 1 \end{array}\right] \xrightarrow{R_2 + R_1,\; R_3 - 5R_1} \left[\begin{array}{ccc|ccc} 1 & 5 & 4 & 1 & 2 & 0 \\ 0 & 7 & 6 & 1 & 3 & 0 \\ 0 & -25 & -21 & -5 & -10 & 1 \end{array}\right]$$

$$\xrightarrow{\frac{1}{7}R_2} \left[\begin{array}{ccc|ccc} 1 & 5 & 4 & 1 & 2 & 0 \\ 0 & 1 & \tfrac{6}{7} & \tfrac{1}{7} & \tfrac{3}{7} & 0 \\ 0 & -25 & -21 & -5 & -10 & 1 \end{array}\right] \xrightarrow{R_3 + 25R_2} \left[\begin{array}{ccc|ccc} 1 & 5 & 4 & 1 & 2 & 0 \\ 0 & 1 & \tfrac{6}{7} & \tfrac{1}{7} & \tfrac{3}{7} & 0 \\ 0 & 0 & \tfrac{3}{7} & -\tfrac{10}{7} & \tfrac{5}{7} & 1 \end{array}\right]$$

$$\xrightarrow{\frac{7}{3}R_3} \left[\begin{array}{ccc|ccc} 1 & 5 & 4 & 1 & 2 & 0 \\ 0 & 1 & \tfrac{6}{7} & \tfrac{1}{7} & \tfrac{3}{7} & 0 \\ 0 & 0 & 1 & -\tfrac{10}{3} & \tfrac{5}{3} & \tfrac{7}{3} \end{array}\right] \xrightarrow{R_2 - \frac{6}{7}R_3} \left[\begin{array}{ccc|ccc} 1 & 5 & 4 & 1 & 2 & 0 \\ 0 & 1 & 0 & 3 & -1 & -2 \\ 0 & 0 & 1 & -\tfrac{10}{3} & \tfrac{5}{3} & \tfrac{7}{3} \end{array}\right]$$

$$\xrightarrow{R_1 - 4R_3} \left[\begin{array}{ccc|ccc} 1 & 5 & 0 & \tfrac{43}{3} & -\tfrac{14}{3} & -\tfrac{28}{3} \\ 0 & 1 & 0 & 3 & -1 & -2 \\ 0 & 0 & 1 & -\tfrac{10}{3} & \tfrac{5}{3} & \tfrac{7}{3} \end{array}\right] \xrightarrow{R_1 - 5R_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -\tfrac{2}{3} & \tfrac{1}{3} & \tfrac{2}{3} \\ 0 & 1 & 0 & 3 & -1 & -2 \\ 0 & 0 & 1 & -\tfrac{10}{3} & \tfrac{5}{3} & \tfrac{7}{3} \end{array}\right]$$

So, we've gotten the first three columns reduced to $I_3$, and that means the last three must be the inverse.

$$C^{-1} = \begin{bmatrix} -\tfrac{2}{3} & \tfrac{1}{3} & \tfrac{2}{3} \\ 3 & -1 & -2 \\ -\tfrac{10}{3} & \tfrac{5}{3} & \tfrac{7}{3} \end{bmatrix}$$

We'll leave it to you to verify that $CC^{-1} = C^{-1}C = I_3$.
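A one-line NumPy confirmation of Example 2, again not part of the notes (printing $3C^{-1}$ keeps the fractions readable):

```python
import numpy as np

C = np.array([[3., 1., 0.], [-1., 2., 2.], [5., 0., -1.]])
Cinv = np.linalg.inv(C)
print(3 * Cinv)   # [[-2. 1. 2.] [9. -3. -6.] [-10. 5. 7.]], i.e. C^{-1} above
print(np.allclose(C @ Cinv, np.eye(3)))   # True
```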
Okay, so far we've seen how to use this method to determine an inverse, but what happens if a matrix doesn't have an inverse? It turns out that we can also use this method to determine that, and it generally doesn't take quite as much work as it does to actually find the inverse (if it exists, of course…). Let's take a look at an example.

Example 3 Show that the following matrix does not have an inverse, i.e. show the matrix is singular.

$$B = \begin{bmatrix} 3 & 3 & 6 \\ 0 & 1 & 2 \\ -2 & 0 & 0 \end{bmatrix}$$

Solution Okay, the problem statement says that the matrix is singular, but let's pretend that we didn't know that and work the problem as we did in the previous two examples. That means we'll need the new matrix,

$$\left[\begin{array}{ccc|ccc} 3 & 3 & 6 & 1 & 0 & 0 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ -2 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$

Now, let's get started on reducing the first three columns to $I_3$.

$$\xrightarrow{R_1 + R_3} \left[\begin{array}{ccc|ccc} 1 & 3 & 6 & 1 & 0 & 1 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ -2 & 0 & 0 & 0 & 0 & 1 \end{array}\right] \xrightarrow{R_3 + 2R_1} \left[\begin{array}{ccc|ccc} 1 & 3 & 6 & 1 & 0 & 1 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 0 & 6 & 12 & 2 & 0 & 3 \end{array}\right] \xrightarrow{R_3 - 6R_2} \left[\begin{array}{ccc|ccc} 1 & 3 & 6 & 1 & 0 & 1 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 & -6 & 3 \end{array}\right]$$

At this point let's stop and examine the third row in a little more detail. In order for the first three columns to be $I_3$, the first three entries of the last row MUST be $[0\;\;0\;\;1]$, which we clearly don't have. We could use a multiple of row 1 or row 2 to get a 1 into the third spot, but doing that would also change at least one of the first two entries away from 0, and that's a problem since those entries must remain zeroes. In other words, there is no way to make the third entry in the third row a 1 without also changing one or both of the first two entries into something other than zero, and so we will never be able to make the first three columns into $I_3$.

So, there is no set of row operations that will reduce $B$ to $I_3$, and hence $B$ is NOT row equivalent to $I_3$. Now, go back to Theorem 3. This was a set of equivalent statements, and if one is false they are all false. We've just managed to show that part (c) is false, and that means that part (a) must also be false. Therefore, $B$ must be a singular matrix.

The idea used in this last example to show that $B$ was singular can be used in general. If, in the course of reducing the new matrix, we ever end up with a row in which all the entries to the left of the dashed line are zeroes, we will know that the matrix must be singular.

We'll leave this section off with a quick formula that can always be used to find the inverse of an invertible $2 \times 2$ matrix, as well as a way to quickly determine whether the matrix is invertible. The method above is nice in that it always works, but it can be cumbersome to use, so the following formula can help to make things go quicker for $2 \times 2$ matrices.

Theorem 4 The matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

will be invertible if $ad - bc \ne 0$ and singular if $ad - bc = 0$. If the matrix is invertible its inverse will be,

$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

Let's do a quick example or two with this fact.

Example 4 Use the fact to show that

$$A = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix}$$

is an invertible matrix and find its inverse.

Solution We've already looked at this one above, but let's do it here so we can contrast the work between the two methods. First, we need,

$$ad - bc = (4)(5) - (-2)(-5) = 20 - 10 = 10 \ne 0$$

So, the matrix is in fact invertible by the fact, and here is the inverse,

$$A^{-1} = \frac{1}{10}\begin{bmatrix} 5 & 2 \\ 5 & 4 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\[2pt] \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}$$

Example 5 Determine if the following matrix is singular.

$$B = \begin{bmatrix} 4 & -2 \\ -6 & 3 \end{bmatrix}$$

Solution Not much to do with this one.

$$ad - bc = (4)(3) - (-2)(-6) = 12 - 12 = 0$$

So, by the fact, the matrix is singular.

If you'd like to see a couple more examples of finding inverses, check out the section on Special Matrices; there are a couple more examples there.
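Theorem 4's formula is a one-liner in code as well. A small sketch (not from the notes, helper name hypothetical) that also signals singularity:

```python
import numpy as np

def inv2x2(a, b, c, d):
    """Theorem 4: inverse of [[a, b], [c, d]], or None if ad - bc = 0."""
    det = a * d - b * c
    if det == 0:
        return None
    return np.array([[d, -b], [-c, a]]) / det

print(inv2x2(4, -2, -5, 5))   # [[0.5 0.2] [0.5 0.4]]
print(inv2x2(4, -2, -6, 3))   # None, the matrix is singular
```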
Special Matrices

This section is devoted to a couple of special matrices that we could have talked about pretty much anywhere, but due to the desire to keep most of these sections as small as possible they just didn't fit in anywhere. However, we'll need a couple of them in the next section and so we now need to get them out of the way.

Diagonal Matrix
The first one that we're going to take a look at is a diagonal matrix. A square matrix is called diagonal if it has the following form.

$$D = \begin{bmatrix} d_1 & 0 & 0 & \cdots & 0 \\ 0 & d_2 & 0 & \cdots & 0 \\ 0 & 0 & d_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & d_n \end{bmatrix} \qquad (n \times n)$$

In other words, a diagonal matrix is any matrix in which the only potentially non-zero entries are on the main diagonal. Any entry off the main diagonal must be zero, and note that it is possible to have one or more of the main diagonal entries be zero. We've also been dealing with a diagonal matrix already to this point, if you think about it a little: the identity matrix is a diagonal matrix.

Here is a nice theorem about diagonal matrices.

Theorem 1 Suppose $D$ is a diagonal matrix and $d_1, d_2, \ldots, d_n$ are the entries on the main diagonal. If one or more of the $d_i$'s are zero then the matrix is singular. On the other hand, if $d_i \ne 0$ for all $i$ then the matrix is invertible and the inverse is,

$$D^{-1} = \begin{bmatrix} \tfrac{1}{d_1} & 0 & \cdots & 0 \\ 0 & \tfrac{1}{d_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \tfrac{1}{d_n} \end{bmatrix}$$

Proof : First, recall Theorem 3 from the previous section. This theorem tells us that if $D$ is row equivalent to the identity matrix then $D$ is invertible, and if $D$ is not row equivalent to the identity then $D$ is singular.

If none of the $d_i$'s are zero then we can reduce $D$ to the identity simply by dividing each row by its diagonal entry (which we can do since we've assumed none of them are zero), and so in this case $D$ is row equivalent to the identity. Therefore, in this case $D$ is invertible. We'll leave it to you to verify that the inverse is what we claim it to be. You can either compute it directly using the method from the previous section or you can verify that $DD^{-1} = D^{-1}D = I$.

Now, suppose that at least one of the $d_i$ is equal to zero. In this case we will have a row of all zeroes, and because $D$ is a diagonal matrix all the entries above the main diagonal entry in this row's column will also be zero, so there is no way for us to use elementary row operations to put a 1 into that main diagonal spot. In this case $D$ will not be row equivalent to the identity and hence must be singular.

Powers of diagonal matrices are also easy to compute. If $D$ is a diagonal matrix and $k$ is any integer then,

$$D^k = \begin{bmatrix} d_1^k & 0 & \cdots & 0 \\ 0 & d_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n^k \end{bmatrix}$$

Triangular Matrix
The next kind of matrix we want to take a look at is the triangular matrix. In fact there are two kinds of triangular matrices. For an upper triangular matrix the matrix must be square, all the entries below the main diagonal are zero, and the main diagonal entries and the entries above it may or may not be zero. A lower triangular matrix is just the opposite: the matrix is still square, all the entries above the main diagonal are zero, and the main diagonal entries and those below it may or may not be zero. Here are the general forms of an upper and a lower triangular matrix.

$$U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix} \qquad \text{Upper Triangular } (n \times n)$$

$$L = \begin{bmatrix} l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & l_{22} & 0 & \cdots & 0 \\ l_{31} & l_{32} & l_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn} \end{bmatrix} \qquad \text{Lower Triangular } (n \times n)$$

In these forms the $u_{ij}$ and $l_{ij}$ may or may not be zero.

If we do not care whether the matrix is upper or lower triangular we will generally just call it triangular. Note as well that a diagonal matrix can be thought of as both an upper triangular matrix and a lower triangular matrix.
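Theorem 1 and the power rule for diagonal matrices are easy to sanity check; the diagonal entries below are made up for the purpose, and NumPy is again an assumption:

```python
import numpy as np

d = np.array([5.0, 2.0, -0.5])   # hypothetical non-zero diagonal entries
D = np.diag(d)

print(np.allclose(np.linalg.inv(D), np.diag(1.0 / d)))              # True
print(np.allclose(np.linalg.matrix_power(D, 3), np.diag(d ** 3)))   # True
```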
Here's a nice theorem about the invertibility of a triangular matrix.

Theorem 2 If $A$ is a triangular matrix with main diagonal entries $a_{11}, a_{22}, \ldots, a_{nn}$, then if one or more of the $a_{ii}$'s are zero the matrix will be singular. On the other hand, if $a_{ii} \ne 0$ for all $i$ then the matrix is invertible.

Here is the outline of the proof.

Proof Outline : First assume that $a_{ii} \ne 0$ for all $i$. In this case we can divide each row by $a_{ii}$ (since it's not zero) and that will put a 1 in the main diagonal entry for each row. Now use the third row operation to eliminate all the non-zero entries above the main diagonal entries for an upper triangular matrix, or below them for a lower triangular matrix. When done with these operations we will have reduced $A$ to the identity matrix. Therefore, in this case $A$ is row equivalent to the identity and so must be invertible.

Now assume that at least one of the $a_{ii}$ is zero. In this case we can't get a 1 in the main diagonal entry just by dividing by $a_{ii}$ as we did in the first case. For a second, let's suppose we have an upper triangular matrix. In this case we could use the third row operation with one of the rows above this one to get a 1 into the main diagonal entry; however, this will also put non-zero entries into the spots to the left of that diagonal entry. In other words, we're not going to be able to reduce $A$ to the identity matrix. The same type of problem arises for a lower triangular matrix. In this case $A$ is not row equivalent to the identity and so is singular.

Here is another set of theorems about triangular matrices that we aren't going to prove.

Theorem 3
(a) The product of lower triangular matrices will be a lower triangular matrix.
(b) The product of upper triangular matrices will be an upper triangular matrix.
(c) The inverse of an invertible lower triangular matrix will be a lower triangular matrix.
(d) The inverse of an invertible upper triangular matrix will be an upper triangular matrix.

The proofs of these pretty much follow from how products and inverses are found, and so will be left to you to verify.

The final kind of matrix that we want to look at in this section is the symmetric matrix. In fact we've already seen these in a previous section; we just didn't have the space to investigate them in more detail there, so we're going to do it here. For completeness' sake we'll give the definition here again. Suppose that $A$ is an $n \times m$ matrix; then $A$ will be called symmetric if $A^T = A$.

Note that the first requirement for a matrix to be symmetric is that the matrix must be square. Since the size of $A^T$ will be $m \times n$, there is no way $A$ and $A^T$ can be equal if $A$ is not square, since they won't have the same size.

Example 1 The following matrices are all symmetric.

$$A = \begin{bmatrix} 6 \end{bmatrix} \qquad B = \begin{bmatrix} 4 & -10 & 3 & 0 \\ -10 & 7 & 1 & -4 \\ 3 & 1 & 12 & 8 \\ 0 & -4 & 8 & 5 \end{bmatrix} \qquad C = \begin{bmatrix} 10 & 6 \\ 6 & 0 \end{bmatrix}$$

We'll leave it to you to compute the transposes of each of these and verify that they are in fact symmetric.

Notice with the second matrix ($B$) above that you can always quickly identify a symmetric matrix by looking at the diagonals off the main diagonal. The diagonals right above and below the main diagonal consist of the entries $-10$, $1$, $8$ and are identical. Likewise, the diagonals two above and below the main diagonal consist of the entries $3$, $-4$ and again are identical. Finally, the "diagonals" that are three above and below the main diagonal are identical as well.
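The definition translates directly into a one-line check. A small sketch (not from the notes, helper name hypothetical):

```python
import numpy as np

def is_symmetric(M):
    """A matrix is symmetric exactly when it is square and equals M^T."""
    return M.shape[0] == M.shape[1] and np.array_equal(M, M.T)

B = np.array([[4, -10, 3, 0],
              [-10, 7, 1, -4],
              [3, 1, 12, 8],
              [0, -4, 8, 5]])
print(is_symmetric(B))        # True
print(is_symmetric(B[:2]))    # False: a non-square matrix can't be symmetric
```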
This pattern we see in the second matrix above will hold in any symmetric matrix.

Here is a nice set of facts about arithmetic with symmetric matrices.

Theorem 4 If $A$ and $B$ are symmetric matrices of the same size and $c$ is any scalar then,
(a) $A \pm B$ is symmetric.
(b) $cA$ is symmetric.
(c) $A^T$ is symmetric.

Note that the product of two symmetric matrices is probably not symmetric. To see why, consider the following. Suppose both $A$ and $B$ are symmetric matrices of the same size. Then,

$$\left(AB\right)^T = B^T A^T = BA$$

Notice that we used one of the properties of transposes we found earlier in the first step and the fact that $A$ and $B$ are symmetric in the last step. So what this tells us is that unless $A$ and $B$ commute we won't have $\left(AB\right)^T = AB$ and the product won't be symmetric. If $A$ and $B$ do commute then the product will be symmetric.

Now, if $A$ is any $n \times m$ matrix then, because $A^T$ has size $m \times n$, both $AA^T$ and $A^T A$ will be defined, and in fact both will be square matrices, where $AA^T$ has size $n \times n$ and $A^T A$ has size $m \times m$.

Here are a couple of quick facts about symmetric matrices.

Theorem 5
(a) For any matrix $A$ both $AA^T$ and $A^T A$ are symmetric.
(b) If $A$ is an invertible symmetric matrix then $A^{-1}$ is symmetric.
(c) If $A$ is invertible then $AA^T$ and $A^T A$ are both invertible.

Proof :
(a) We'll show that $AA^T$ is symmetric and leave the other to you to verify. To show that $AA^T$ is symmetric we'll need to show that $\left(AA^T\right)^T = AA^T$. This is actually quite simple if we recall the various properties of transpose matrices that we've got.

$$\left(AA^T\right)^T = \left(A^T\right)^T A^T = AA^T$$

(b) In this case all we need is a theorem from a previous section to show that $\left(A^{-1}\right)^T = A^{-1}$. Here is the work,

$$\left(A^{-1}\right)^T = \left(A^T\right)^{-1} = A^{-1}$$

where we've used the fact that $A$ is symmetric ($A^T = A$) in the last step.

(c) If $A$ is invertible then we also know that $A^T$ is invertible, and since the product of invertible matrices is invertible, both $AA^T$ and $A^T A$ are invertible.

Let's finish this section with an example or two illustrating the results of some of the theorems above.

Example 2 Given the following matrices compute the indicated quantities.

$$A = \begin{bmatrix} 4 & -2 & -1 \\ 0 & 9 & 6 \\ 0 & 0 & 1 \end{bmatrix} \qquad B = \begin{bmatrix} 2 & 0 & -3 \\ 0 & 7 & 1 \\ 0 & 0 & 5 \end{bmatrix} \qquad C = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 9 & 5 & 4 \end{bmatrix}$$

$$D = \begin{bmatrix} 2 & 0 & 4 & 1 \\ -1 & 0 & 1 & 6 \\ 8 & 2 & 1 & 1 \end{bmatrix} \qquad E = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 3 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$

(a) $AB$  (b) $C^{-1}$  (c) $D^T D$  (d) $E^{-1}$

Solution
(a) $AB$ : There really isn't much to do here other than the multiplication, and we'll leave it to you to verify the actual computation.

$$AB = \begin{bmatrix} 8 & -14 & -19 \\ 0 & 63 & 39 \\ 0 & 0 & 5 \end{bmatrix}$$

So, as suggested by Theorem 3, the product of upper triangular matrices is in fact an upper triangular matrix.

(b) $C^{-1}$ : Here's the work for finding $C^{-1}$.

$$\left[\begin{array}{ccc|ccc} 3 & 0 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 & 1 & 0 \\ 9 & 5 & 4 & 0 & 0 & 1 \end{array}\right] \xrightarrow{\frac{1}{3}R_1,\;\frac{1}{2}R_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{3} & 0 & 0 \\ 0 & 1 & 0 & 0 & \tfrac{1}{2} & 0 \\ 9 & 5 & 4 & 0 & 0 & 1 \end{array}\right] \xrightarrow{R_3 - 9R_1,\; R_3 - 5R_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{3} & 0 & 0 \\ 0 & 1 & 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & 4 & -3 & -\tfrac{5}{2} & 1 \end{array}\right]$$

$$\xrightarrow{\frac{1}{4}R_3} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{3} & 0 & 0 \\ 0 & 1 & 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & 1 & -\tfrac{3}{4} & -\tfrac{5}{8} & \tfrac{1}{4} \end{array}\right] \qquad\Rightarrow\qquad C^{-1} = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ 0 & \tfrac{1}{2} & 0 \\ -\tfrac{3}{4} & -\tfrac{5}{8} & \tfrac{1}{4} \end{bmatrix}$$

So, again as suggested by Theorem 3, the inverse of a lower triangular matrix is also a lower triangular matrix.
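A quick numerical cross-check of part (b), not from the notes:

```python
import numpy as np

C = np.array([[3., 0., 0.], [0., 2., 0.], [9., 5., 4.]])
print(np.linalg.inv(C))
# [[ 0.3333  0.      0.    ]
#  [ 0.      0.5     0.    ]
#  [-0.75   -0.625   0.25  ]]  -- lower triangular, as Theorem 3(c) predicts
```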
(c) $D^T D$ : Here's the transpose and the product.

$$D^T = \begin{bmatrix} 2 & -1 & 8 \\ 0 & 0 & 2 \\ 4 & 1 & 1 \\ 1 & 6 & 1 \end{bmatrix}$$

$$D^T D = \begin{bmatrix} 2 & -1 & 8 \\ 0 & 0 & 2 \\ 4 & 1 & 1 \\ 1 & 6 & 1 \end{bmatrix}\begin{bmatrix} 2 & 0 & 4 & 1 \\ -1 & 0 & 1 & 6 \\ 8 & 2 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 69 & 16 & 15 & 4 \\ 16 & 4 & 2 & 2 \\ 15 & 2 & 18 & 11 \\ 4 & 2 & 11 & 38 \end{bmatrix}$$

So, as suggested by Theorem 5, this product is symmetric even though $D$ was not symmetric (or square, for that matter).

(d) $E^{-1}$ : Here is the work for finding $E^{-1}$.

$$\left[\begin{array}{ccc|ccc} 1 & 2 & 0 & 1 & 0 & 0 \\ 2 & 3 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right] \xrightarrow{R_2 - 2R_1} \left[\begin{array}{ccc|ccc} 1 & 2 & 0 & 1 & 0 & 0 \\ 0 & -1 & 1 & -2 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right] \xrightarrow{R_2 \leftrightarrow R_3} \left[\begin{array}{ccc|ccc} 1 & 2 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & -1 & 1 & -2 & 1 & 0 \end{array}\right]$$

$$\xrightarrow{R_3 + R_2} \left[\begin{array}{ccc|ccc} 1 & 2 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -2 & 1 & 1 \end{array}\right] \xrightarrow{R_1 - 2R_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & -2 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -2 & 1 & 1 \end{array}\right]$$

So, the inverse is

$$E^{-1} = \begin{bmatrix} 1 & 0 & -2 \\ 0 & 0 & 1 \\ -2 & 1 & 1 \end{bmatrix}$$

and, as suggested by Theorem 5, the inverse is symmetric, since $E$ itself was an invertible symmetric matrix.
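Parts (c) and (d) are exactly Theorem 5 in action, and a NumPy check (not from the notes) confirms both at once:

```python
import numpy as np

D = np.array([[2., 0., 4., 1.], [-1., 0., 1., 6.], [8., 2., 1., 1.]])
M = D.T @ D
print(np.array_equal(M, M.T))      # True: Theorem 5(a), D^T D is symmetric

E = np.array([[1., 2., 0.], [2., 3., 1.], [0., 1., 0.]])
Einv = np.linalg.inv(E)
print(np.allclose(Einv, Einv.T))   # True: Theorem 5(b), E^{-1} is symmetric
```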
LU-Decomposition

In this section we're going to discuss a method for factoring a square matrix $A$ into a product of a lower triangular matrix, $L$, and an upper triangular matrix, $U$. Such a factorization can be used to solve systems of equations, as we'll see in the next section when we revisit that topic. Let's start the section out with a definition and a theorem.

Definition 1 If $A$ is a square matrix and it can be factored as $A = LU$ where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix, then we say that $A$ has an LU-Decomposition of $LU$.

Theorem 1 If $A$ is a square matrix and it can be reduced to a row-echelon form, $U$, without interchanging any rows, then $A$ can be factored as $A = LU$ where $L$ is a lower triangular matrix.

We're not going to prove this theorem, but let's examine it in some detail, and in the process we'll find a way of determining $L$. Let's start off by assuming that we've got a square matrix $A$ and that we are able to reduce it to the row-echelon form $U$ without interchanging any rows. We know that each row operation that we used has a corresponding elementary matrix, so let's suppose that the elementary matrices corresponding to the row operations we used are $E_1, E_2, \ldots, E_k$. We know from Theorem 4 in a previous section that multiplying these on the left side of $A$, in the same order we applied the row operations, is the same as actually applying the operations. So, this means that we've got,

$$E_k \cdots E_2 E_1 A = U$$

We also know that elementary matrices are invertible, so multiply each side by the inverses, $E_1^{-1}, E_2^{-1}, \ldots, E_k^{-1}$, in that order, to get,

$$A = E_1^{-1} E_2^{-1} \cdots E_k^{-1} U$$

Now, it can be shown that, provided we avoid interchanging rows, the elementary row operations that we needed to reduce $A$ to $U$ will all have corresponding elementary matrices that are lower triangular. We also know from the previous section that inverses of lower triangular matrices are lower triangular and that products of lower triangular matrices are lower triangular. In other words,

$$L = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$$

is a lower triangular matrix, and so using this we get the LU-Decomposition for $A$ of $A = LU$.

Let's take a look at an example of this.

Example 1 Determine an LU-Decomposition for the following matrix.

$$A = \begin{bmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix}$$

Solution So, first let's go through the row operations to get this into row-echelon form, and remember that we aren't allowed to do any interchanging of rows. Also, we'll do this step by step so that we can keep track of the row operations that we used, since we're going to need to write down the elementary matrices associated with them eventually.

$$\begin{bmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix} \xrightarrow{\frac{1}{3}R_1} \begin{bmatrix} 1 & 2 & -3 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix} \xrightarrow{R_2 - 2R_1} \begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ -4 & 1 & 10 \end{bmatrix} \xrightarrow{R_3 + 4R_1} \begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 9 & -2 \end{bmatrix}$$

$$\xrightarrow{R_3 - 9R_2} \begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & -29 \end{bmatrix} \xrightarrow{-\frac{1}{29}R_3} \begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix}$$

Okay, so we've got our hands on $U$.

$$U = \begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix}$$

Now we need to get $L$. This is going to take a little more work. We'll need the elementary matrices for each of these operations, or more precisely their inverses. Recall that we can get the elementary matrix for a particular row operation by applying that operation to the appropriately sized identity matrix ($3 \times 3$ in this case). Also recall that the inverse matrix can be found by applying the inverse operation to the identity matrix. Here are the elementary matrices and their inverses for each of the operations above.

$$\tfrac{1}{3}R_1\,: \quad E_1 = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad E_1^{-1} = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$R_2 - 2R_1\,: \quad E_2 = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad E_2^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$R_3 + 4R_1\,: \quad E_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} \qquad E_3^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -4 & 0 & 1 \end{bmatrix}$$

$$R_3 - 9R_2\,: \quad E_4 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -9 & 1 \end{bmatrix} \qquad E_4^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 9 & 1 \end{bmatrix}$$

$$-\tfrac{1}{29}R_3\,: \quad E_5 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -\tfrac{1}{29} \end{bmatrix} \qquad E_5^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -29 \end{bmatrix}$$

Okay, we can now compute $L$.

$$L = E_1^{-1} E_2^{-1} E_3^{-1} E_4^{-1} E_5^{-1} = \begin{bmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -4 & 9 & -29 \end{bmatrix}$$

Finally, we can verify that we've got an LU-Decomposition with a quick computation.

$$LU = \begin{bmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -4 & 9 & -29 \end{bmatrix}\begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix} = A$$

So we did all the work correctly.
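As an aside not found in the notes: library LU routines normally do interchange rows (partial pivoting), so they actually compute a factorization of the form $PA = LU$ with a permutation matrix $P$, and their $L$ is unit lower triangular rather than the form we build here. A quick sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[3., 6., -9.], [2., 5., -3.], [-4., 1., 10.]])
P, L, U = lu(A)                    # A = P @ L @ U, with pivoting
print(np.allclose(P @ L @ U, A))   # True
```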
That was a lot of work to determine $L$. There is an easier way to do it, however. Let's start off with a general $L$ with a $\ast$ in place of each potentially non-zero entry.

$$L = \begin{bmatrix} \ast & 0 & 0 \\ \ast & \ast & 0 \\ \ast & \ast & \ast \end{bmatrix}$$

Let's start with the main diagonal, and go back and look at the operations that were required to get 1's on the diagonal when we were computing $U$. To get a 1 in the first row we had to multiply that row by $\frac{1}{3}$. We didn't need to do anything to get a 1 in the second row, but for the sake of argument let's say that we actually multiplied that row by 1. Finally, we multiplied the third row by $-\frac{1}{29}$ to get a 1 in the main diagonal entry in that row.

Next, go back and look at the $L$ that we found for this matrix. The main diagonal entries are 3, 1, and $-29$. In other words, they are the reciprocals of the scalars we used in computing $U$. This will always be the case. The main diagonal of $L$, using this idea, is then,

$$L = \begin{bmatrix} 3 & 0 & 0 \\ \ast & 1 & 0 \\ \ast & \ast & -29 \end{bmatrix}$$

Now, let's take a look at the two entries under the 3 in the first column. Again, go back to the operations used to find $U$ and take a look at the operations we used to get zeroes in these two spots. To get a zero in the second row we added $-2R_1$ onto $R_2$, and to get a zero in the third row we added $4R_1$ onto $R_3$. Again, go back to the $L$ we found and notice that these two entries are 2 and $-4$. That is, they are the negatives of the multiples of the first row that we added onto those rows to make those entries zero. Filling these in we now arrive at,

$$L = \begin{bmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -4 & \ast & -29 \end{bmatrix}$$

Finally, in determining $U$ we added $-9R_2$ onto $R_3$ to get the entry in the third row and second column to be zero, and in the $L$ we found this entry is 9. Again, it's the negative of the multiple of the second row we used to make this entry zero. This gives us the final entry in $L$.

$$L = \begin{bmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -4 & 9 & -29 \end{bmatrix}$$

The process we just went through will always work for determining $L$ for our LU-Decomposition, provided we follow the process above to find $U$. In fact, that is the one drawback to this process: we need to find $U$ using exactly the same steps we used in this example. In other words, multiply/divide the first row by an appropriate scalar to get a 1 in the first column, then zero out the entries below that 1. Next, multiply/divide the second row by an appropriate scalar to get a 1 in the main diagonal entry of the second row, then zero out all the entries below it. Continue in this fashion until you've dealt with all the columns. This will sometimes lead to some messy fractions.

Let's take a look at another example, and this time we'll use the procedure outlined above to find $L$ instead of dealing with all the elementary matrices.

Example 2 Determine an LU-Decomposition for the following matrix.

$$B = \begin{bmatrix} -2 & 3 & -4 \\ 5 & -4 & -4 \\ 1 & 7 & 0 \end{bmatrix}$$

Solution So, we first need to reduce $B$ to row-echelon form without using row interchanges. Also, if we're going to use the process outlined above to find $L$, we'll need to do the reduction in the same manner as in the first example. Here is that work.

$$\begin{bmatrix} -2 & 3 & -4 \\ 5 & -4 & -4 \\ 1 & 7 & 0 \end{bmatrix} \xrightarrow{-\frac{1}{2}R_1} \begin{bmatrix} 1 & -\tfrac{3}{2} & 2 \\ 5 & -4 & -4 \\ 1 & 7 & 0 \end{bmatrix} \xrightarrow{R_2 - 5R_1,\; R_3 - R_1} \begin{bmatrix} 1 & -\tfrac{3}{2} & 2 \\ 0 & \tfrac{7}{2} & -14 \\ 0 & \tfrac{17}{2} & -2 \end{bmatrix}$$

$$\xrightarrow{\frac{2}{7}R_2} \begin{bmatrix} 1 & -\tfrac{3}{2} & 2 \\ 0 & 1 & -4 \\ 0 & \tfrac{17}{2} & -2 \end{bmatrix} \xrightarrow{R_3 - \frac{17}{2}R_2} \begin{bmatrix} 1 & -\tfrac{3}{2} & 2 \\ 0 & 1 & -4 \\ 0 & 0 & 32 \end{bmatrix} \xrightarrow{\frac{1}{32}R_3} \begin{bmatrix} 1 & -\tfrac{3}{2} & 2 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix}$$

So, $U$ is,

$$U = \begin{bmatrix} 1 & -\tfrac{3}{2} & 2 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix}$$

Now, let's get $L$. Again, we'll start with a general $L$, and the main diagonal entries will be the reciprocals of the scalars we needed to multiply each row by to get a one in its main diagonal entry. This gives,

$$L = \begin{bmatrix} -2 & 0 & 0 \\ \ast & \tfrac{7}{2} & 0 \\ \ast & \ast & 32 \end{bmatrix}$$

Now, for the remaining entries, go back to the process and look for the multiple that was needed to get a zero in each spot; the entry will be the negative of that multiple. This gives us our final $L$.

$$L = \begin{bmatrix} -2 & 0 & 0 \\ 5 & \tfrac{7}{2} & 0 \\ 1 & \tfrac{17}{2} & 32 \end{bmatrix}$$

As a final check we can always do a quick multiplication to verify that we do in fact get $B$ from this factorization.

$$LU = \begin{bmatrix} -2 & 0 & 0 \\ 5 & \tfrac{7}{2} & 0 \\ 1 & \tfrac{17}{2} & 32 \end{bmatrix}\begin{bmatrix} 1 & -\tfrac{3}{2} & 2 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 3 & -4 \\ 5 & -4 & -4 \\ 1 & 7 & 0 \end{bmatrix} = B$$

So, it looks like we did all the work correctly.
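The "reciprocals on the diagonal, negatives of the multipliers below it" recipe mechanizes nicely. Below is a minimal sketch of the procedure from these notes (not a library routine, and it assumes no row interchanges are ever needed):

```python
import numpy as np

def lu_no_interchange(A):
    """LU following the notes: scale each row to put a 1 on the diagonal
    (the reciprocal of that scalar goes into L), then zero out the entries
    below it (the negatives of those multipliers go into L)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = U[i, i]           # reciprocal of the scalar 1/U[i,i]
        U[i] /= U[i, i]             # put a 1 in the diagonal entry
        for r in range(i + 1, n):
            L[r, i] = U[r, i]       # negative of the multiplier -U[r,i]
            U[r] -= U[r, i] * U[i]  # zero out the entry below the diagonal
    return L, U

B = np.array([[-2., 3., -4.], [5., -4., -4.], [1., 7., 0.]])
L, U = lu_no_interchange(B)
print(L)                       # matches the L found in Example 2
print(np.allclose(L @ U, B))   # True
```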
We'll leave this section by pointing out a couple of facts about LU-Decompositions.

First, given a random square matrix, $A$, the only way we can guarantee that $A$ will have an LU-Decomposition is if we can reduce it to row-echelon form without interchanging any rows. If we do have to interchange rows then there is a good chance that the matrix will NOT have an LU-Decomposition.

Second, notice that every time we've talked about an LU-Decomposition of a matrix we've used the word "an" and not "the" LU-Decomposition. This choice of words is intentional. As the choice suggests, there is no single unique LU-Decomposition for $A$. To see that LU-Decompositions are not unique, go back to the first example. In that example we computed the following LU-Decomposition.

$$\begin{bmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -4 & 9 & -29 \end{bmatrix}\begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix}$$

However, we've also got the following LU-Decomposition.

$$\begin{bmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{2}{3} & 1 & 0 \\ -\tfrac{4}{3} & 9 & 1 \end{bmatrix}\begin{bmatrix} 3 & 6 & -9 \\ 0 & 1 & 3 \\ 0 & 0 & -29 \end{bmatrix}$$

This is clearly an LU-Decomposition, since the first matrix is lower triangular and the second is upper triangular, and you should verify that upon multiplying they do in fact give the shown matrix.

If you would like to see a further example of an LU-Decomposition worked out, there is an example in the next section.

Systems Revisited

We opened up this chapter talking about systems of equations, and we spent a couple of sections on them, but then we moved away from them and haven't really talked much about them since. It's time to come back to systems and see how some of the ideas we've been talking about since then can be used to help us solve systems. We'll also take a quick look at a couple of other ideas about systems that we didn't look at earlier.

First let's recall that any system of $n$ equations and $m$ unknowns,

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1m}x_m &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2m}x_m &= b_2 \\ &\;\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nm}x_m &= b_n \end{aligned}$$

can be written in matrix form as follows.

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \qquad\qquad A\mathbf{x} = \mathbf{b}$$

In the matrix form, $A$ is called the coefficient matrix and each row contains the coefficients of the corresponding equation, $\mathbf{x}$ is a column matrix that contains all the unknowns from the system of equations, and finally $\mathbf{b}$ is a column matrix containing the constants on the right of the equal sign.

Now, let's see how inverses can be used to solve systems. First, we'll need to assume that the coefficient matrix is a square $n \times n$ matrix. In other words, there are the same number of equations as unknowns in our system. Let's also assume that $A$ is invertible. In this case we actually saw, in the proof of Theorem 3 in the section on finding inverses, that the solution to $A\mathbf{x} = \mathbf{b}$ is unique (i.e. only a single solution exists) and that it's given by,

$$\mathbf{x} = A^{-1}\mathbf{b}$$

So, if we've got the inverse of the coefficient matrix in hand (not always an easy thing to find, of course…) we can get the solution based on a quick matrix multiplication.
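In code this really is one multiplication once the inverse is known. A NumPy sketch, not from the notes, reusing the invertible matrix from Example 1 of the Finding Inverses section with a made-up right-hand side:

```python
import numpy as np

A = np.array([[4., -2.], [-5., 5.]])   # invertible; A^{-1} was found earlier
b = np.array([6., 5.])                 # hypothetical right-hand side
x = np.linalg.inv(A) @ b               # x = A^{-1} b
print(x)                               # [4. 5.]
```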
\[\begin{bmatrix} 3 & 1 & 0 \\ -1 & 2 & 2 \\ 5 & 0 & -1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 6 \\ -7 \\ 10 \end{bmatrix}\]
Now, we found the inverse of the coefficient matrix back in Example 2 of the Finding Inverses section so here are the coefficient matrix and its inverse.
\[A = \begin{bmatrix} 3 & 1 & 0 \\ -1 & 2 & 2 \\ 5 & 0 & -1 \end{bmatrix} \qquad A^{-1} = \begin{bmatrix} -\frac{2}{3} & \frac{1}{3} & \frac{2}{3} \\ 3 & -1 & -2 \\ -\frac{10}{3} & \frac{5}{3} & \frac{7}{3} \end{bmatrix}\]
The solution to the system in matrix form is then,
\[\mathbf{x} = A^{-1}\mathbf{b} = \begin{bmatrix} -\frac{2}{3} & \frac{1}{3} & \frac{2}{3} \\ 3 & -1 & -2 \\ -\frac{10}{3} & \frac{5}{3} & \frac{7}{3} \end{bmatrix}\begin{bmatrix} 6 \\ -7 \\ 10 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} \\ 5 \\ -\frac{25}{3} \end{bmatrix}\]
Now, since each of the entries of x is one of the unknowns in the original system above, the solution to the original system is,
\[x_1 = \frac{1}{3} \qquad x_2 = 5 \qquad x_3 = -\frac{25}{3}\]
So, provided we have a square coefficient matrix that is invertible and we just happen to have our hands on the inverse of the coefficient matrix, we can find the solution to the system fairly easily.

Next, let's look at how the topic of the previous section (LU-Decompositions) can be used to solve systems of equations. First let's recall how LU-Decompositions work. If we have a square matrix, A, (so we'll again be working with the same number of equations as unknowns) then, if we can reduce it to row-echelon form without using any row interchanges, we can write it as $A = LU$ where L is a lower triangular matrix and U is an upper triangular matrix.

So, let's start with a system $A\mathbf{x} = \mathbf{b}$ where the coefficient matrix, A, is n×n and has an LU-Decomposition $A = LU$. Now, substitute this into the system for A to get,
\[LU\mathbf{x} = \mathbf{b}\]
Next, let's just take a look at $U\mathbf{x}$. This will be an n×1 column matrix and let's call it y. So, we've got $U\mathbf{x} = \mathbf{y}$. So, just what does this do for us? Well, let's write the system in the following manner.
\[L\mathbf{y} = \mathbf{b} \qquad \text{where} \qquad U\mathbf{x} = \mathbf{y}\]
As we'll see, it's very easy to solve $L\mathbf{y} = \mathbf{b}$ for y and once we know y it will be very easy to solve $U\mathbf{x} = \mathbf{y}$ for x, which will be the solution to the original system. It's probably easiest to see how this method works with an example so let's work one.

Example 2 Use the LU-Decomposition method to find the solution to the following system of equations.
\[\begin{aligned} 3x_1 + 6x_2 - 9x_3 &= 0 \\ 2x_1 + 5x_2 - 3x_3 &= -4 \\ -4x_1 + x_2 + 10x_3 &= 3 \end{aligned}\]

Solution
First let's write down the matrix form of the system.
\[\begin{bmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ -4 \\ 3 \end{bmatrix}\]
Now, we found an LU-Decomposition of this coefficient matrix in Example 1 of the previous section. From that example we see that,
\[\begin{bmatrix} 3 & 6 & -9 \\ 2 & 5 & -3 \\ -4 & 1 & 10 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -4 & 9 & -29 \end{bmatrix}\begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix}\]
According to the method outlined above this means that we actually need to solve the following two systems,
\[\begin{bmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -4 & 9 & -29 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 0 \\ -4 \\ 3 \end{bmatrix} \qquad \begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}\]
in order.

So, let's get started on the first one. Notice that we don't really need to do anything other than write down the equations that are associated with this system and solve using forward substitution. The first equation will give us $y_1$ for free and, once we know that, the second equation will give us $y_2$. Finally, with these two values in hand, the third equation will give us $y_3$. Here is that work.
\[\begin{aligned} 3y_1 &= 0 &&\Rightarrow\; y_1 = 0 \\ 2y_1 + y_2 &= -4 &&\Rightarrow\; y_2 = -4 \\ -4y_1 + 9y_2 - 29y_3 &= 3 &&\Rightarrow\; y_3 = -\frac{39}{29} \end{aligned}\]
The second system that we need to solve is then,
\[\begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ -4 \\ -\frac{39}{29} \end{bmatrix}\]
Again, notice that to solve this all we need to do is write down the equations and do back substitution. The third equation will give us $x_3$ for free and plugging this into the second equation will give us $x_2$, etc. Here's the work for this.
\[\begin{aligned} x_1 + 2x_2 - 3x_3 &= 0 &&\Rightarrow\; x_1 = -\frac{119}{29} \\ x_2 + 3x_3 &= -4 &&\Rightarrow\; x_2 = \frac{1}{29} \\ x_3 &= -\frac{39}{29} &&\Rightarrow\; x_3 = -\frac{39}{29} \end{aligned}\]
The solution to the original system is then shown above. Notice that while the final answers were a little messy, the work was nothing more than a little arithmetic and wasn't terribly difficult.

Let's work one more of these since there's a little more work involved in this than the inverse matrix method of solving a system.

Example 3 Use the LU-Decomposition method to find a solution to the following system of equations.
\[\begin{aligned} -2x_1 + 4x_2 - 3x_3 &= -1 \\ 3x_1 - 2x_2 + x_3 &= 17 \\ -4x_2 + 3x_3 &= -9 \end{aligned}\]

Solution
Once again, let's first get the matrix form of the system.
\[\begin{bmatrix} -2 & 4 & -3 \\ 3 & -2 & 1 \\ 0 & -4 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -1 \\ 17 \\ -9 \end{bmatrix}\]
Now let's get an LU-Decomposition for the coefficient matrix. Here's the work that will reduce it to row-echelon form. Remember that the result of this will be U.
\[\begin{bmatrix} -2 & 4 & -3 \\ 3 & -2 & 1 \\ 0 & -4 & 3 \end{bmatrix} \xrightarrow{-\frac{1}{2}R_1} \begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 3 & -2 & 1 \\ 0 & -4 & 3 \end{bmatrix} \xrightarrow{R_2 - 3R_1} \begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 0 & 4 & -\frac{7}{2} \\ 0 & -4 & 3 \end{bmatrix}\]
\[\xrightarrow{\frac{1}{4}R_2} \begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 0 & 1 & -\frac{7}{8} \\ 0 & -4 & 3 \end{bmatrix} \xrightarrow{R_3 + 4R_2} \begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 0 & 1 & -\frac{7}{8} \\ 0 & 0 & -\frac{1}{2} \end{bmatrix} \xrightarrow{-2R_3} \begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 0 & 1 & -\frac{7}{8} \\ 0 & 0 & 1 \end{bmatrix}\]
So, U is then,
\[U = \begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 0 & 1 & -\frac{7}{8} \\ 0 & 0 & 1 \end{bmatrix}\]
Now, to get L remember that we start off with a general lower triangular matrix and on the main diagonal we put the reciprocal of the scalar used in the work above to get a one in that spot. Then, in the entries below the main diagonal we put the negative of the multiple used to get a zero in that spot above. L is then,
\[L = \begin{bmatrix} -2 & 0 & 0 \\ 3 & 4 & 0 \\ 0 & -4 & -\frac{1}{2} \end{bmatrix}\]
We'll leave it to you to verify that $A = LU$. Now let's solve the system. This will mean we need to solve the following two systems.
\[\begin{bmatrix} -2 & 0 & 0 \\ 3 & 4 & 0 \\ 0 & -4 & -\frac{1}{2} \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} -1 \\ 17 \\ -9 \end{bmatrix} \qquad \begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 0 & 1 & -\frac{7}{8} \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}\]
Here's the work for the first system.
\[\begin{aligned} -2y_1 &= -1 &&\Rightarrow\; y_1 = \frac{1}{2} \\ 3y_1 + 4y_2 &= 17 &&\Rightarrow\; y_2 = \frac{31}{8} \\ -4y_2 - \frac{1}{2}y_3 &= -9 &&\Rightarrow\; y_3 = -13 \end{aligned}\]
Now let's get the actual solution by solving the second system.
\[\begin{bmatrix} 1 & -2 & \frac{3}{2} \\ 0 & 1 & -\frac{7}{8} \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} \frac{1}{2} \\ \frac{31}{8} \\ -13 \end{bmatrix}\]
Here is the substitution work for this system.
\[\begin{aligned} x_1 - 2x_2 + \frac{3}{2}x_3 &= \frac{1}{2} &&\Rightarrow\; x_1 = 5 \\ x_2 - \frac{7}{8}x_3 &= \frac{31}{8} &&\Rightarrow\; x_2 = -\frac{15}{2} \\ x_3 &= -13 &&\Rightarrow\; x_3 = -13 \end{aligned}\]
So there's the solution to this system.
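The forward and back substitutions in Examples 2 and 3 follow the same mechanical pattern every time. Here is a small sketch of that pattern (our own naming, not from the notes), assuming L and U come from a routine like the one sketched in the previous section:

```python
def solve_lu(L, U, b):
    """Solve L y = b by forward substitution, then U x = y by back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                       # top equation first
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # bottom equation first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# Example 2 again: x = [-119/29, 1/29, -39/29]
L = [[3, 0, 0], [2, 1, 0], [-4, 9, -29]]
U = [[1, 2, -3], [0, 1, 3], [0, 0, 1]]
print(solve_lu(L, U, [0, -4, 3]))
```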
Before moving on to the next topic of this section we should probably address why we even bothered with this method. It seems like a lot of work to solve a system of equations, and when solving systems by hand it can be. However, because the method for finding L and U is a fairly straightforward process, and once those are found the method for solving the system is also very straightforward, this is a perfect method for use in computer systems when programming the solution to systems. So, while it seems like a lot of work, it is a method that is very easy to program and so is a very useful method.

The remaining topics in this section don't really rely on previous sections as the first part of this section has. Instead we just need to look at a couple of ideas about solving systems that we didn't have room to put into the section on solving systems of equations.

First we want to take a look at the following scenario. Suppose that we need to solve a system of equations, only there are two or more sets of the $b_i$'s that we need to look at. For instance, suppose we wanted to solve the following systems of equations.
\[A\mathbf{x} = \mathbf{b}_1 \qquad A\mathbf{x} = \mathbf{b}_2 \qquad \cdots \qquad A\mathbf{x} = \mathbf{b}_k\]
Again, the coefficient matrix is the same for all these systems and the only thing that is different is the $\mathbf{b}_i$'s. We could use any of the methods looked at so far to solve these systems. However, each of the methods we've looked at so far would require us to do each system individually and that could potentially lead to a lot of work.

There is one method however that can be easily extended to solve multiple systems simultaneously provided they all have the same coefficient matrix. In fact the method is the very first one we looked at. In that method we solved systems by adding the column matrix b onto the coefficient matrix and then reducing it to row-echelon or reduced row-echelon form. For the systems above this would require working with the following augmented matrices.
\[\begin{bmatrix} A \mid \mathbf{b}_1 \end{bmatrix} \qquad \begin{bmatrix} A \mid \mathbf{b}_2 \end{bmatrix} \qquad \cdots \qquad \begin{bmatrix} A \mid \mathbf{b}_k \end{bmatrix}\]
However, if you think about it, almost the whole reduction process revolves around the columns in the augmented matrix that are associated with A and not the b column. So, instead of doing these individually let's add all of them onto the coefficient matrix as follows.
\[\begin{bmatrix} A \mid \mathbf{b}_1 \;\; \mathbf{b}_2 \;\; \cdots \;\; \mathbf{b}_k \end{bmatrix}\]
All we need to do is reduce this to reduced row-echelon form and we'll have the answer to each of the systems. Let's take a look at an example of this.

Example 4 Find the solution to each of the following systems.
\[\begin{aligned} x_1 - 3x_2 + 4x_3 &= 12 &\qquad x_1 - 3x_2 + 4x_3 &= 0 \\ -2x_1 + x_2 + 2x_3 &= 1 &\qquad -2x_1 + x_2 + 2x_3 &= -5 \\ 5x_1 - 2x_2 - 3x_3 &= 3 &\qquad 5x_1 - 2x_2 - 3x_3 &= -8 \end{aligned}\]

Solution
So, we've got two systems with the same coefficient matrix so let's form the following matrix. Note that we'll leave the vertical bar in to make sure we remember the last two columns are really b's for the systems we're solving.
\[\left[\begin{array}{rrr|rr} 1 & -3 & 4 & 12 & 0 \\ -2 & 1 & 2 & 1 & -5 \\ 5 & -2 & -3 & 3 & -8 \end{array}\right]\]
Now, we just need to reduce this to reduced row-echelon form. Here is the work for that.
\[\left[\begin{array}{rrr|rr} 1 & -3 & 4 & 12 & 0 \\ -2 & 1 & 2 & 1 & -5 \\ 5 & -2 & -3 & 3 & -8 \end{array}\right] \xrightarrow[R_3 - 5R_1]{R_2 + 2R_1} \left[\begin{array}{rrr|rr} 1 & -3 & 4 & 12 & 0 \\ 0 & -5 & 10 & 25 & -5 \\ 0 & 13 & -23 & -57 & -8 \end{array}\right] \xrightarrow{-\frac{1}{5}R_2} \left[\begin{array}{rrr|rr} 1 & -3 & 4 & 12 & 0 \\ 0 & 1 & -2 & -5 & 1 \\ 0 & 13 & -23 & -57 & -8 \end{array}\right]\]
\[\xrightarrow{R_3 - 13R_2} \left[\begin{array}{rrr|rr} 1 & -3 & 4 & 12 & 0 \\ 0 & 1 & -2 & -5 & 1 \\ 0 & 0 & 3 & 8 & -21 \end{array}\right] \xrightarrow{\frac{1}{3}R_3} \left[\begin{array}{rrr|rr} 1 & -3 & 4 & 12 & 0 \\ 0 & 1 & -2 & -5 & 1 \\ 0 & 0 & 1 & \frac{8}{3} & -7 \end{array}\right]\]
\[\xrightarrow[R_2 + 2R_3]{R_1 - 4R_3} \left[\begin{array}{rrr|rr} 1 & -3 & 0 & \frac{4}{3} & 28 \\ 0 & 1 & 0 & \frac{1}{3} & -13 \\ 0 & 0 & 1 & \frac{8}{3} & -7 \end{array}\right] \xrightarrow{R_1 + 3R_2} \left[\begin{array}{rrr|rr} 1 & 0 & 0 & \frac{7}{3} & -11 \\ 0 & 1 & 0 & \frac{1}{3} & -13 \\ 0 & 0 & 1 & \frac{8}{3} & -7 \end{array}\right]\]
Okay, the solution to the first system is in the fourth column since that is the b for the first system and likewise the solution to the second system is in the fifth column. Therefore, the solution to the first system is,
\[x_1 = \frac{7}{3} \qquad x_2 = \frac{1}{3} \qquad x_3 = \frac{8}{3}\]
and the solution to the second system is,
\[x_1 = -11 \qquad x_2 = -13 \qquad x_3 = -7\]

The remaining topic to discuss in this section gives us a method for answering the following question. Given an n×m matrix A, determine all the n×1 matrices, b, for which $A\mathbf{x} = \mathbf{b}$ is consistent, that is, for which $A\mathbf{x} = \mathbf{b}$ has at least one solution. This is a question that can arise fairly often and so we should take a look at how to answer it.

Of course, if A is invertible (and hence square) the answer is that $A\mathbf{x} = \mathbf{b}$ is consistent for all b, as we saw in an earlier section. However, what if A isn't square or isn't invertible? The method we're going to look at doesn't really care about whether or not A is invertible, but it really should be pointed out that we do know the answer for invertible matrices.

It's easiest to see how these work with an example so let's jump into one.

Example 5 Determine the conditions (if any) on $b_1$, $b_2$, and $b_3$ in order for the following system to be consistent.
\[\begin{aligned} x_1 - 2x_2 + 6x_3 &= b_1 \\ -x_1 + x_2 - x_3 &= b_2 \\ -3x_1 + x_2 + 8x_3 &= b_3 \end{aligned}\]

Solution
Okay, we're going to use the augmented matrix method we first looked at here and reduce the matrix down to reduced row-echelon form. The final form will be a little messy because of the presence of the $b_i$'s, but other than that the work is identical to what we've been doing to this point. Here is the work.
\[\left[\begin{array}{rrr|c} 1 & -2 & 6 & b_1 \\ -1 & 1 & -1 & b_2 \\ -3 & 1 & 8 & b_3 \end{array}\right] \xrightarrow[R_3 + 3R_1]{R_2 + R_1} \left[\begin{array}{rrr|c} 1 & -2 & 6 & b_1 \\ 0 & -1 & 5 & b_2 + b_1 \\ 0 & -5 & 26 & b_3 + 3b_1 \end{array}\right] \xrightarrow{-R_2} \left[\begin{array}{rrr|c} 1 & -2 & 6 & b_1 \\ 0 & 1 & -5 & -b_1 - b_2 \\ 0 & -5 & 26 & b_3 + 3b_1 \end{array}\right]\]
\[\xrightarrow{R_3 + 5R_2} \left[\begin{array}{rrr|c} 1 & -2 & 6 & b_1 \\ 0 & 1 & -5 & -b_1 - b_2 \\ 0 & 0 & 1 & -2b_1 - 5b_2 + b_3 \end{array}\right] \xrightarrow[R_2 + 5R_3]{R_1 - 6R_3} \left[\begin{array}{rrr|c} 1 & -2 & 0 & 13b_1 + 30b_2 - 6b_3 \\ 0 & 1 & 0 & -11b_1 - 26b_2 + 5b_3 \\ 0 & 0 & 1 & -2b_1 - 5b_2 + b_3 \end{array}\right]\]
\[\xrightarrow{R_1 + 2R_2} \left[\begin{array}{rrr|c} 1 & 0 & 0 & -9b_1 - 22b_2 + 4b_3 \\ 0 & 1 & 0 & -11b_1 - 26b_2 + 5b_3 \\ 0 & 0 & 1 & -2b_1 - 5b_2 + b_3 \end{array}\right]\]
Okay, just what does this all mean? Well, go back to equations and let's see what we've got.
\[\begin{aligned} x_1 &= -9b_1 - 22b_2 + 4b_3 \\ x_2 &= -11b_1 - 26b_2 + 5b_3 \\ x_3 &= -2b_1 - 5b_2 + b_3 \end{aligned}\]
So, what this says is that no matter what our choice of $b_1$, $b_2$, and $b_3$ we can find a solution using the general solution above and, in fact, there will always be exactly one solution to the system for a given choice of b. Therefore, there are no conditions on $b_1$, $b_2$, and $b_3$ in order for the system to be consistent.

Note that the result of the previous example shouldn't be too surprising given that the coefficient matrix is invertible.
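Going back to Example 4 for a moment, the shared-reduction idea is easy to automate as well. The sketch below (hypothetical names; a real implementation would add error handling) reduces the combined augmented matrix with Gauss-Jordan elimination and reads each solution out of its own column. Note that, unlike the LU method, row interchanges are perfectly fine here, so a partial-pivot swap is included for safety.

```python
def solve_many(A, bs):
    """Solve A x = b for several right-hand sides with one reduction of
    [A | b1 b2 ... bk]. Assumes A is square and invertible."""
    n, k = len(A), len(bs)
    M = [A[i][:] + [b[i] for b in bs] for i in range(n)]
    for j in range(n):
        p = max(range(j, n), key=lambda r: abs(M[r][j]))   # pick a pivot row
        M[j], M[p] = M[p], M[j]
        M[j] = [x / M[j][j] for x in M[j]]                 # scale pivot row
        for i in range(n):
            if i != j:
                m = M[i][j]
                M[i] = [a - m * c for a, c in zip(M[i], M[j])]
    return [[M[i][n + c] for i in range(n)] for c in range(k)]

# Example 4: returns [7/3, 1/3, 8/3] and [-11, -13, -7]
print(solve_many([[1, -3, 4], [-2, 1, 2], [5, -2, -3]],
                 [[12, 1, 3], [0, -5, -8]]))
```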
Now, we need to see what happens if the coefficient matrix is singular (i.e. not invertible).

Example 6 Determine the conditions (if any) on $b_1$, $b_2$, and $b_3$ in order for the following system to be consistent.
\[\begin{aligned} x_1 + 3x_2 - 2x_3 &= b_1 \\ -x_1 - 5x_2 + 3x_3 &= b_2 \\ 2x_1 - 8x_2 + 3x_3 &= b_3 \end{aligned}\]

Solution
We'll do this one in the same manner as the previous one. So, convert to an augmented matrix and start the reduction process. As we'll see, in this case we won't need to go all the way to reduced row-echelon form to get the answer, however.
\[\left[\begin{array}{rrr|c} 1 & 3 & -2 & b_1 \\ -1 & -5 & 3 & b_2 \\ 2 & -8 & 3 & b_3 \end{array}\right] \xrightarrow[R_3 - 2R_1]{R_2 + R_1} \left[\begin{array}{rrr|c} 1 & 3 & -2 & b_1 \\ 0 & -2 & 1 & b_2 + b_1 \\ 0 & -14 & 7 & b_3 - 2b_1 \end{array}\right] \xrightarrow{R_3 - 7R_2} \left[\begin{array}{rrr|c} 1 & 3 & -2 & b_1 \\ 0 & -2 & 1 & b_2 + b_1 \\ 0 & 0 & 0 & b_3 - 7b_2 - 9b_1 \end{array}\right]\]
Okay, let's stop here and see what we've got. The last row corresponds to the following equation.
\[0 = b_3 - 7b_2 - 9b_1\]
If the right side of this equation is NOT zero then this equation will not make any sense and so the system won't have a solution. If, however, it is zero then this equation will not be a problem and, since we can take the first two rows and finish out the process to find a solution for any given values of $b_1$ and $b_2$, we'll have a solution.

This then gives us the condition that we're looking for. In order for the system to have a solution, and hence be consistent, we must have
\[b_3 = 9b_1 + 7b_2\]

Determinants

Introduction

By this point in your mathematical career you should have run across functions. The functions that you've probably seen to this point have had the form $f(x)$ where x is a real number and the output of the function is also a real number. Some examples of functions are $f(x) = x^2$ and $f(x) = \cos(3x) - \sin(x)$.

Not all functions, however, need to take a real number as an argument. For instance, we could have a function $f(X)$ that takes a matrix X and outputs a real number. In this chapter we are going to be looking at one such function, the determinant function. The determinant function is a function that will associate a real number with a square matrix.

The determinant function is a function that we won't be seeing all that often in the rest of this course, but it will show up on occasion.

Here is a listing of the topics in this chapter.

The Determinant Function – We will give the formal definition of the determinant in this section. We'll also give formulas for computing determinants of 2×2 and 3×3 matrices.

Properties of Determinants – Here we will take a look at quite a few properties of the determinant function. Included are formulas for determinants of triangular matrices.

The Method of Cofactors – In this section we'll take a look at the first of two methods for computing determinants of general matrices.

Using Row Reduction to Find Determinants – Here we will take a look at the second method for computing determinants in general.

Cramer's Rule – We will take a look at yet another method for solving systems. This method will involve the use of determinants.

The Determinant Function

We'll start off the chapter by defining the determinant function. This is not such an easy thing, however, as it involves some ideas and notation that you probably haven't run across to this point. So, before we actually define the determinant function we need to get some preliminaries out of the way.

First, a permutation of the set of integers $\{1, 2, \ldots, n\}$ is an arrangement of all the integers in the list without omission or repetitions.
A permutation of $\{1, 2, \ldots, n\}$ will typically be denoted by $(i_1, i_2, \ldots, i_n)$ where $i_1$ is the first number in the permutation, $i_2$ is the second number in the permutation, etc.

Example 1 List all permutations of $\{1,2\}$.

Solution
This one isn't too bad because there are only two integers in the list. We need to come up with all the possible ways to arrange these two numbers. Here they are.
\[(1,2) \qquad (2,1)\]

Example 2 List all the permutations of $\{1,2,3\}$.

Solution
This one is a little harder to do, but still isn't too bad. We need all the arrangements of these three numbers in which no number is repeated or omitted. Here they are.
\[(1,2,3) \quad (1,3,2) \quad (2,1,3) \quad (2,3,1) \quad (3,1,2) \quad (3,2,1)\]

From this point on it can be somewhat difficult to find permutations for lists of numbers with more than 3 numbers in them. One way to make sure that you get all of them is to write down a permutation tree. The idea of the permutation tree for $\{1,2,3\}$ is this: at the top we list all the numbers in the list and from each top number we branch out with each of the remaining numbers in the list. At the second level we again branch out with each of the numbers from the list not yet written down along that branch. Then each branch will represent a permutation of the given list of numbers.

As you can see, the number of permutations for a list will grow quickly as we add numbers to the list. In fact it can be shown that there are $n!$ permutations of the list $\{1, 2, \ldots, n\}$, or of any list containing n distinct numbers, but we're going to be working with $\{1, 2, \ldots, n\}$ so that's the one we'll reference. So, the list $\{1,2,3,4\}$ will have $4! = (4)(3)(2)(1) = 24$ permutations, the list $\{1,2,3,4,5\}$ will have $5! = (5)(4)(3)(2)(1) = 120$ permutations, etc.

Next we need to discuss inversions in a permutation. An inversion will occur in the permutation $(i_1, i_2, \ldots, i_n)$ whenever a larger number precedes a smaller number. Note as well we don't mean that the smaller number is immediately to the right of the larger number, but anywhere to the right of the larger number.

Example 3 Determine the number of inversions in each of the following permutations.
(a) $(3,1,4,2)$
(b) $(1,2,4,3)$
(c) $(4,3,2,1)$
(d) $(1,2,3,4,5)$
(e) $(2,5,4,1,3)$

Solution
(a) $(3,1,4,2)$
Okay, to count the number of inversions we will start at the leftmost number and count the number of numbers to its right that are smaller. We then move to the second number and do the same thing. We continue in this fashion until we get to the end. The total number of inversions is then the sum of all these. We'll do this first one in detail and then do the remaining ones much quicker. We'll mark the number we're looking at in bold and to the side give the number of inversions for that particular number.
\[\begin{aligned} (\mathbf{3},1,4,2) &\qquad 2 \text{ inversions} \\ (3,\mathbf{1},4,2) &\qquad 0 \text{ inversions} \\ (3,1,\mathbf{4},2) &\qquad 1 \text{ inversion} \end{aligned}\]
In the first case there are two numbers to the right of 3 that are smaller than 3, so there are two inversions there. In the second case we're looking at the smallest number in the list and so there won't be any inversions there. Then with 4 there is one number to the right that is smaller than 4 and so we pick up another inversion. There is no reason to look at the last number in the permutation since there are no numbers to the right of it and so it won't introduce any inversions.

The permutation $(3,1,4,2)$ has a total of 3 inversions.
(b) $(1,2,4,3)$
We'll do this one much quicker. There are $0 + 0 + 1 = 1$ inversions in $(1,2,4,3)$. Note that each number in the sum above represents the number of inversions for the number in that position in the permutation.

(c) $(4,3,2,1)$
There are $3 + 2 + 1 = 6$ inversions in $(4,3,2,1)$.

(d) $(1,2,3,4,5)$
There are no inversions in $(1,2,3,4,5)$.

(e) $(2,5,4,1,3)$
There are $1 + 3 + 2 + 0 = 6$ inversions in $(2,5,4,1,3)$.

Next, a permutation is called even if the number of inversions is even and odd if the number of inversions is odd.

Example 4 Classify as even or odd all the permutations of the following lists.
(a) $\{1,2\}$
(b) $\{1,2,3\}$

Solution
(a) Here's a table giving all the permutations, the number of inversions in each, and the classification.

Permutation | # Inversions | Classification
$(1,2)$ | 0 | even
$(2,1)$ | 1 | odd

(b) We'll do the same thing here.

Permutation | # Inversions | Classification
$(1,2,3)$ | 0 | even
$(1,3,2)$ | 1 | odd
$(2,1,3)$ | 1 | odd
$(2,3,1)$ | 2 | even
$(3,1,2)$ | 2 | even
$(3,2,1)$ | 3 | odd

We'll need these results later in the section.

Alright, let's move back into matrices. We still have some definitions to get out of the way before we define the determinant function, but at least we're back dealing with matrices.

Suppose that we have an n×n matrix, A. An elementary product from this matrix will be a product of n entries from A where none of the entries in the product can be from the same row or column.

Example 5 Find all the elementary products for,
(a) a 2×2 matrix
(b) a 3×3 matrix

Solution
(a) a 2×2 matrix.
Okay, let's first write down the general 2×2 matrix.
\[A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\]
Each elementary product will contain two terms and, since each term must come from a different row, we know that each elementary product must have the form,
\[a_{1\,\square}\, a_{2\,\square}\]
All we need to do is fill in the column subscripts and remember in doing so that they must come from different columns. There are really only two possible ways to fill in the blanks in the product above. The two ways of filling in the blanks are $(1,2)$ and $(2,1)$, and yes, we did mean to use the permutation notation there since that is exactly what we need. We will fill in the blanks with all the possible permutations of the list of column numbers, $\{1,2\}$ in this case.

So, the elementary products for a 2×2 matrix are
\[a_{11}a_{22} \qquad a_{12}a_{21}\]

(b) a 3×3 matrix.
Again, let's start off with a general 3×3 matrix for reference purposes.
\[A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\]
Each of the elementary products in this case will involve three terms and again, since they must all come from different rows, we can write down the form they must take.
\[a_{1\,\square}\, a_{2\,\square}\, a_{3\,\square}\]
Again, each of the column subscripts will need to come from different columns and, like the 2×2 case, we can get all the possible choices for these by filling in the blanks with all the possible permutations of $\{1,2,3\}$.

So, the elementary products of the 3×3 matrix are,
\[\begin{aligned} &a_{11}a_{22}a_{33} \qquad a_{11}a_{23}a_{32} \qquad a_{12}a_{21}a_{33} \\ &a_{12}a_{23}a_{31} \qquad a_{13}a_{21}a_{32} \qquad a_{13}a_{22}a_{31} \end{aligned}\]

A general n×n matrix A will have $n!$ elementary products of the form
\[a_{1i_1}a_{2i_2}\cdots a_{ni_n}\]
where $(i_1, i_2, \ldots, i_n)$ ranges over all the permutations of $\{1, 2, \ldots, n\}$.
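Counting inversions is tedious by hand for long permutations, but it is a two-line computation. Here is a quick sketch (our own helper, not part of the notes) that reproduces the counts from Example 3 and the classifications from Example 4:

```python
from itertools import permutations

def inversions(perm):
    """Number of pairs where a larger number precedes a smaller one."""
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm)) if perm[i] > perm[j])

print(inversions((3, 1, 4, 2)))     # 3, so (3,1,4,2) is odd
print(inversions((2, 5, 4, 1, 3)))  # 6, so (2,5,4,1,3) is even

# Classify every permutation of {1,2,3}, as in Example 4(b):
for p in permutations((1, 2, 3)):
    print(p, "even" if inversions(p) % 2 == 0 else "odd")
```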
We can now take care of the final preliminary definition that we need for the determinant function. A signed elementary product from A will be the elementary product $a_{1i_1}a_{2i_2}\cdots a_{ni_n}$ multiplied by "+1" if $(i_1, i_2, \ldots, i_n)$ is an even permutation or multiplied by "−1" if $(i_1, i_2, \ldots, i_n)$ is an odd permutation.

Example 6 Find all the signed elementary products for,
(a) a 2×2 matrix
(b) a 3×3 matrix

Solution
We listed out all the elementary products in Example 5 and we classified all the permutations used in them as even or odd in Example 4. So, all we need to do is put all this information together for each matrix.

(a) a 2×2 matrix. Here are the signed elementary products for the 2×2 matrix.

Elementary Product | Permutation | Signed Elementary Product
$a_{11}a_{22}$ | $(1,2)$ – even | $a_{11}a_{22}$
$a_{12}a_{21}$ | $(2,1)$ – odd | $-a_{12}a_{21}$

(b) a 3×3 matrix. Here are the signed elementary products for the 3×3 matrix.

Elementary Product | Permutation | Signed Elementary Product
$a_{11}a_{22}a_{33}$ | $(1,2,3)$ – even | $a_{11}a_{22}a_{33}$
$a_{11}a_{23}a_{32}$ | $(1,3,2)$ – odd | $-a_{11}a_{23}a_{32}$
$a_{12}a_{21}a_{33}$ | $(2,1,3)$ – odd | $-a_{12}a_{21}a_{33}$
$a_{12}a_{23}a_{31}$ | $(2,3,1)$ – even | $a_{12}a_{23}a_{31}$
$a_{13}a_{21}a_{32}$ | $(3,1,2)$ – even | $a_{13}a_{21}a_{32}$
$a_{13}a_{22}a_{31}$ | $(3,2,1)$ – odd | $-a_{13}a_{22}a_{31}$

Okay, we can now give the definition of the determinant function.

Definition 1 If A is a square matrix then the determinant function is denoted by det and det(A) is defined to be the sum of all the signed elementary products of A.

Note that often we will call the number det(A) the determinant of A. Also, there is some alternate notation that is sometimes used for determinants. We will sometimes denote determinants as $\det(A) = |A|$ and this is most often done with the actual matrix instead of the letter representing the matrix. For instance, for a 2×2 matrix A we will use any of the following to denote the determinant,
\[\det(A) = |A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}\]
So, now that we have the definition of the determinant function in hand we can actually start writing down some formulas. We'll give the formula for 2×2 and 3×3 matrices only because for any matrix larger than that the formula becomes very long and messy, and at those sizes there are alternate methods for computing determinants that will be easier.

So, with that said, we've got all the signed elementary products for 2×2 and 3×3 matrices listed in Example 6, so let's write down the determinant function for these matrices.

First the determinant function for a 2×2 matrix.
\[\det(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}\]
Now the determinant function for a 3×3 matrix.
\[\det(A) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32} - a_{13}a_{22}a_{31}\]
Okay, the formula for a 2×2 matrix isn't too bad, but the formula for a 3×3 is messy and would not be fun to memorize. Fortunately, there is an easy way to quickly "derive" both of these formulas.

Before we give this quick trick to "derive" the formulas we should point out that what we're going to do ONLY works for 2×2 and 3×3 matrices. There is no corresponding trick for larger matrices!

Okay, let's start with a 2×2 matrix. Picture the determinant with two diagonals sketched on it: one running from the upper left to the lower right (through $a_{11}$ and $a_{22}$) and one running from the upper right to the lower left (through $a_{12}$ and $a_{21}$).
The diagonal that runs from left to right covers the positive elementary product in the formula. Likewise, the diagonal that runs from right to left covers the negative elementary product. So, for a 2×2 matrix all we need to do is write down the determinant, sketch in the diagonals, multiply along the diagonals, then add the product if the diagonal runs from left to right and subtract the product if the diagonal runs from right to left.

Now let's take a look at a 3×3 matrix. There is a similar trick that will work here, but in order to get it to work we'll first need to tack copies of the first 2 columns onto the right side of the determinant, as shown below.
\[\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}\begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{matrix}\]
With the addition of the two extra columns we can see that we've got three diagonals running in each direction and that each will cover one of the elementary products for this matrix. Also, the diagonals that run from left to right cover the positive elementary products and those that run from right to left cover the negative elementary products.

So, as with the 2×2 matrix, we can quickly write down the determinant function formula here by simply multiplying along each diagonal and then adding the product if the diagonal runs left to right or subtracting the product if the diagonal runs right to left.

Let's take a quick look at a couple of examples with numbers just to make sure we can do these.

Example 7 Compute the determinant of each of the following matrices.
(a) $A = \begin{bmatrix} 3 & 2 \\ -9 & 5 \end{bmatrix}$
(b) $B = \begin{bmatrix} 3 & 5 & 4 \\ -2 & -1 & 8 \\ -11 & 1 & 7 \end{bmatrix}$
(c) $C = \begin{bmatrix} 2 & -6 & 2 \\ 2 & -8 & 3 \\ -3 & 1 & 1 \end{bmatrix}$

Solution
(a) We don't really need to sketch in the diagonals for 2×2 matrices. The determinant is simply the product of the diagonal running left to right minus the product of the diagonal running from right to left. So, here is the determinant for this matrix. The only thing we need to worry about is paying attention to minus signs. It is easy to make a mistake with minus signs in these computations if you aren't paying attention.
\[\det(A) = (3)(5) - (2)(-9) = 33\]

(b) Okay, with this one we'll copy the two columns over to make sure we've got the idea of these down.
\[\det(B) = \begin{vmatrix} 3 & 5 & 4 \\ -2 & -1 & 8 \\ -11 & 1 & 7 \end{vmatrix}\begin{matrix} 3 & 5 \\ -2 & -1 \\ -11 & 1 \end{matrix}\]
Now, just remember to add products along the left to right diagonals and subtract products along the right to left diagonals.
\[\det(B) = (3)(-1)(7) + (5)(8)(-11) + (4)(-2)(1) - (5)(-2)(7) - (3)(8)(1) - (4)(-1)(-11) = -467\]

(c) We'll do this one with a little less detail. We'll copy the columns but not bother to actually sketch in the diagonals this time.
\[\det(C) = \begin{vmatrix} 2 & -6 & 2 \\ 2 & -8 & 3 \\ -3 & 1 & 1 \end{vmatrix}\begin{matrix} 2 & -6 \\ 2 & -8 \\ -3 & 1 \end{matrix}\]
\[= (2)(-8)(1) + (-6)(3)(-3) + (2)(2)(1) - (-6)(2)(1) - (2)(3)(1) - (2)(-8)(-3) = 0\]

As this example has shown, determinants of matrices can be positive, negative or zero. It is again worth noting that there are no such tricks for computing determinants of matrices larger than 3×3.

In the remainder of this chapter we'll take a look at some properties of determinants, two alternate methods for computing them that are not restricted by the size of the matrix as the two quick tricks we saw in this section were, and an application of determinants.
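Definition 1 can be turned into code almost word for word, which makes for a handy (if slow, since the sum has $n!$ terms) check on hand computations. This sketch assumes the matrix is given as a list of rows; the naming is our own.

```python
from itertools import permutations

def det_by_definition(A):
    """Sum of all the signed elementary products (Definition 1)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])        # inversions decide even/odd
        prod = 1
        for row, col in enumerate(perm):       # one factor from each row and column
            prod *= A[row][col]
        total += prod if inv % 2 == 0 else -prod
    return total

# Checks against Example 7:
print(det_by_definition([[3, 2], [-9, 5]]))                       # 33
print(det_by_definition([[3, 5, 4], [-2, -1, 8], [-11, 1, 7]]))   # -467
print(det_by_definition([[2, -6, 2], [2, -8, 3], [-3, 1, 1]]))    # 0
```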
Properties of Determinants

In this section we'll be taking a look at some of the basic properties of determinants and towards the end of this section we'll have a nice test for the invertibility of a matrix.

In this section we'll give a fair number of theorems (and prove a few of them) as well as examples illustrating the theorems. Any proofs that are omitted are generally more involved than we want to get into in this class.

Most of the theorems in this section will not help us to actually compute determinants in general. Most of these theorems are really more about how the determinants of different matrices relate to each other. We will take a look at a couple of theorems that will help show us how to find determinants for some special kinds of matrices, but we'll have to wait until the next two sections to start looking at how to compute determinants in general.

All of the determinants that we'll be computing in the examples in this section will be of a 2×2 or a 3×3 matrix. If you need a refresher on how to compute determinants of these kinds of matrices check out the examples in the previous section. We won't actually be showing any of that work here in this section.

Let's start with the following theorem.

Theorem 1 Let A be an n×n matrix and c be a scalar. Then,
\[\det(cA) = c^n \det(A)\]

Proof: This is a really simple proof. From the definition of the determinant function in the previous section we know that the determinant is the sum of all the signed elementary products for the matrix. So, for cA we will sum signed elementary products that are of the form,
\[(ca_{1i_1})(ca_{2i_2})\cdots(ca_{ni_n}) = c^n\, a_{1i_1}a_{2i_2}\cdots a_{ni_n}\]
Recall that for scalar multiplication we multiply all the entries by c and so we'll have a c on each entry as shown above. Also, as shown, we can factor all n of the c's out. Note that $a_{1i_1}a_{2i_2}\cdots a_{ni_n}$ is the signed elementary product for A.

Now, if we add all the signed elementary products for cA we can factor the $c^n$ that is on each term out of the sum and what we're left with is the sum of all the signed elementary products of A, or in other words, we're left with det(A). So, we're done.

Here's a quick example to verify the results of this theorem.

Example 1 For the given matrix below compute both det(A) and det(2A).
\[A = \begin{bmatrix} 4 & -2 & 5 \\ -1 & -7 & 10 \\ 0 & 1 & -3 \end{bmatrix}\]

Solution
We'll leave it to you to verify all the details of this problem. First the scalar multiple.
\[2A = \begin{bmatrix} 8 & -4 & 10 \\ -2 & -14 & 20 \\ 0 & 2 & -6 \end{bmatrix}\]
The determinants.
\[\det(A) = 45 \qquad \det(2A) = 360 = (8)(45) = 2^3\det(A)\]

Now, let's investigate the relationship between det(A), det(B) and det(A+B). We'll start with the following example.

Example 2 Compute det(A), det(B) and det(A+B) for the following matrices.
\[A = \begin{bmatrix} 10 & 6 \\ -3 & 1 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & -2 \\ 5 & 6 \end{bmatrix}\]

Solution
Here are all the determinants.
\[\det(A) = 28 \qquad \det(B) = 16 \qquad \det(A+B) = 69\]
Notice here that for this example we have $\det(A+B) \neq \det(A) + \det(B)$. In fact this will generally be the case.

There is a very special case where we will get equality for the sum of determinants, but it doesn't happen all that often. Here is the theorem detailing this special case.

Theorem 2 Suppose that A, B, and C are all n×n matrices and that they differ by only a row, say the kth row.
Let's further suppose that the kth row of C can be found by adding the corresponding entries from the kth rows of A and B. Then in this case we will have that
\[\det(C) = \det(A) + \det(B)\]
The same result will hold if we replace the word row with column above.

Here is an example of this theorem.

Example 3 Consider the following three matrices.
\[A = \begin{bmatrix} 4 & -2 & 1 \\ -6 & 1 & 7 \\ 1 & -3 & 9 \end{bmatrix} \qquad B = \begin{bmatrix} 4 & -2 & 1 \\ 2 & -5 & 3 \\ 1 & -3 & 9 \end{bmatrix} \qquad C = \begin{bmatrix} 4 & -2 & 1 \\ -4 & -4 & 10 \\ 1 & -3 & 9 \end{bmatrix}\]
First, notice that we can write C as,
\[C = \begin{bmatrix} 4 & -2 & 1 \\ -4 & -4 & 10 \\ 1 & -3 & 9 \end{bmatrix} = \begin{bmatrix} 4 & -2 & 1 \\ -6+2 & 1+(-5) & 7+3 \\ 1 & -3 & 9 \end{bmatrix}\]
All three matrices differ only in the second row and the second row of C can be found by adding the corresponding entries from the second rows of A and B. The determinants of these matrices are,
\[\det(A) = 15 \qquad \det(B) = -115 \qquad \det(C) = -100 = 15 + (-115)\]

Next let's look at the relationship between the determinants of matrices and their products.

Theorem 3 If A and B are matrices of the same size then
\[\det(AB) = \det(A)\det(B)\]
This theorem can be extended out to as many matrices as we want. For instance,
\[\det(ABC) = \det(A)\det(B)\det(C)\]

Let's check out an example of this.

Example 4 For the given matrices compute det(A), det(B), and det(AB).
\[A = \begin{bmatrix} 1 & -2 & 3 \\ 2 & 7 & 4 \\ 3 & 1 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 0 & 1 & 8 \\ 4 & -1 & 1 \\ 0 & 3 & 3 \end{bmatrix}\]

Solution
Here's the product of the two matrices.
\[AB = \begin{bmatrix} -8 & 12 & 15 \\ 28 & 7 & 35 \\ 4 & 14 & 37 \end{bmatrix}\]
Here are the determinants.
\[\det(A) = -41 \qquad \det(B) = 84 \qquad \det(AB) = -3444 = (-41)(84) = \det(A)\det(B)\]

Here is a theorem relating the determinants of a matrix and its inverse (provided the matrix is invertible of course...).

Theorem 4 Suppose that A is an invertible matrix. Then,
\[\det(A^{-1}) = \frac{1}{\det(A)}\]

Proof: The proof of this theorem is a direct result of the previous theorem. Since A is invertible we know that $AA^{-1} = I$. So take the determinant of both sides and then use the previous theorem on the left side.
\[\det(AA^{-1}) = \det(A)\det(A^{-1}) = \det(I)\]
Now, all that we need is to know that $\det(I) = 1$, which you can prove using Theorem 8 below.
\[\det(A)\det(A^{-1}) = 1 \qquad \Rightarrow \qquad \det(A^{-1}) = \frac{1}{\det(A)}\]

Here's a quick example illustrating this.

Example 5 For the given matrix compute det(A) and det(A⁻¹).
\[A = \begin{bmatrix} 8 & -9 \\ 2 & 5 \end{bmatrix}\]

Solution
We'll leave it to you to verify that A is invertible and that its inverse is,
\[A^{-1} = \begin{bmatrix} \frac{5}{58} & \frac{9}{58} \\ -\frac{1}{29} & \frac{4}{29} \end{bmatrix}\]
Here are the determinants for both of these matrices.
\[\det(A) = 58 \qquad \det(A^{-1}) = \frac{1}{58} = \frac{1}{\det(A)}\]

The next theorem that we want to take a look at is a nice test for the invertibility of matrices.

Theorem 5 A square matrix A is invertible if and only if $\det(A) \neq 0$. A matrix that is invertible is often called non-singular and a matrix that is not invertible is often called singular.

Before doing an example of this let's talk a little bit about the phrase "if and only if" that appears in this theorem. That phrase means that this is kind of like a two way street. This theorem, because of the "if and only if" phrase, says that if we know that A is invertible then we will have $\det(A) \neq 0$. If, on the other hand, we know that $\det(A) \neq 0$ then we will also know that A is invertible.

Most theorems presented in these notes are not "two way streets" so to speak.
They only work one way; if, however, we do have a theorem that does work both ways you will always be able to identify it by the phrase "if and only if".

Now let's work an example to verify this theorem.

Example 6 Compute the determinants of the following two matrices.
\[C = \begin{bmatrix} 3 & 1 & 0 \\ -1 & 2 & 2 \\ 5 & 0 & -1 \end{bmatrix} \qquad B = \begin{bmatrix} 3 & 3 & 6 \\ 0 & 1 & 2 \\ -2 & 0 & 0 \end{bmatrix}\]

Solution
We determined the invertibility of both of these matrices in the section on Finding Inverses so we already know what the answers should be (at some level) for the determinants. In that section we determined that C was invertible and so by Theorem 5 we know that det(C) should be non-zero. We also determined that B was singular (i.e. not invertible) and so we know by Theorem 5 that det(B) should be zero. Here are the determinants of these two matrices.
\[\det(C) = 3 \qquad \det(B) = 0\]
Sure enough, we got zero where we should have and didn't get zero where we shouldn't have.

Here is a theorem relating the determinants of a matrix and its transpose.

Theorem 6 If A is a square matrix then,
\[\det(A^T) = \det(A)\]

Here is an example that verifies the results of this theorem.

Example 7 Compute det(A) and det(Aᵀ) for the following matrix.
\[A = \begin{bmatrix} 5 & 3 & 2 \\ -1 & -8 & -6 \\ 0 & 1 & 1 \end{bmatrix}\]

Solution
We'll leave it to you to verify that
\[\det(A) = \det(A^T) = -9\]

There are a couple of special cases of matrices for which we can quickly find the determinant, so let's take care of those at this point.

Theorem 7 If A is a square matrix with a row or column of all zeroes then $\det(A) = 0$ and so A will be singular.

Proof: The proof here is fairly straightforward. The determinant is the sum of all the signed elementary products and each of these will have a factor from each row and a factor from each column. So, in particular, each will have a factor from the row or column of all zeroes and hence will have a factor of zero, making the whole product zero. All of the products are zero and upon summing them up we will also get zero for the determinant.

Note that in the following example we don't need to worry about the size of the matrix since this theorem gives us a value for the determinant. You might want to check the 2×2 and 3×3 to verify that the determinants are in fact zero. You also might want to come back and verify the other after the next section, where we'll learn methods for computing determinants in general.

Example 8 Each of the following matrices is singular.
\[A = \begin{bmatrix} 4 & 12 & -8 & 0 \\ 5 & 3 & 1 & 2 \\ 0 & 0 & 0 & 0 \\ -5 & 1 & 3 & 6 \end{bmatrix} \qquad B = \begin{bmatrix} 5 & 0 & 1 \\ -9 & 0 & 2 \\ 4 & 0 & 3 \end{bmatrix} \qquad C = \begin{bmatrix} 3 & 9 \\ 0 & 0 \end{bmatrix}\]

It is actually very easy to compute the determinant of any triangular (and hence any diagonal) matrix. Here is the theorem that tells us how to do that.

Theorem 8 Suppose that A is an n×n triangular matrix. Then,
\[\det(A) = a_{11}a_{22}\cdots a_{nn}\]

So, what this theorem tells us is that the determinant of any triangular matrix (upper or lower) or any diagonal matrix is simply the product of the entries from the matrix's main diagonal.

We won't do a formal proof here. We'll just give a quick outline.

Proof Outline: Since we know that the determinant is the sum of the signed elementary products and each elementary product has a factor from each row and a factor from each column, because of the triangular nature of the matrix the only elementary product that won't have at least one zero factor is $a_{11}a_{22}\cdots a_{nn}$. All the others will have at least one zero in them.
Hence the determinant of the matrix must be
\[\det(A) = a_{11}a_{22}\cdots a_{nn}\]

Let's take the determinant of a couple of triangular matrices. You should verify the 2×2 and 3×3 matrices and, after the next section, come back and verify the other.

Example 9 Compute the determinant of each of the following matrices.
\[A = \begin{bmatrix} 5 & 0 & 0 \\ 9 & -3 & 0 \\ 0 & 0 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 6 & 0 \\ 2 & -1 \end{bmatrix} \qquad C = \begin{bmatrix} 10 & 5 & 1 & 3 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 6 & 4 \\ 0 & 0 & 0 & 5 \end{bmatrix}\]

Solution
Here are these determinants.
\[\det(A) = (5)(-3)(4) = -60 \qquad \det(B) = (6)(-1) = -6 \qquad \det(C) = (10)(0)(6)(5) = 0\]

We have one final theorem to give in this section. In the Finding Inverses section we gave a theorem that listed several equivalent statements. Because of Theorem 5 above we can add a statement to that theorem, so let's do that. Here is the improved theorem.

Theorem 9 If A is an n×n matrix then the following statements are equivalent.
(a) A is invertible.
(b) The only solution to the system $A\mathbf{x} = \mathbf{0}$ is the trivial solution.
(c) A is row equivalent to $I_n$.
(d) A is expressible as a product of elementary matrices.
(e) $A\mathbf{x} = \mathbf{b}$ has exactly one solution for every n×1 matrix b.
(f) $A\mathbf{x} = \mathbf{b}$ is consistent for every n×1 matrix b.
(g) $\det(A) \neq 0$

The Method of Cofactors

In this section we're going to examine the first of the two methods that we're going to be looking at for computing the determinant of a general matrix. We'll also see how some of the ideas we're going to look at in this section can be used to determine the inverse of an invertible matrix.

So, before we actually give the method of cofactors we need to get a couple of definitions taken care of.

Definition 1 If A is a square matrix then the minor of $a_{ij}$, denoted by $M_{ij}$, is the determinant of the submatrix that results from removing the ith row and jth column of A.

Definition 2 If A is a square matrix then the cofactor of $a_{ij}$, denoted by $C_{ij}$, is the number $(-1)^{i+j}M_{ij}$.

Let's take a look at computing some minors and cofactors.

Example 1 For the following matrix compute the cofactors $C_{12}$, $C_{24}$, and $C_{32}$.
\[A = \begin{bmatrix} 4 & 0 & 10 & 4 \\ -1 & 2 & 3 & 9 \\ 5 & -5 & -1 & 6 \\ 3 & 7 & 1 & -2 \end{bmatrix}\]

Solution
In order to compute the cofactors we'll first need the minor associated with each cofactor. Remember that in order to compute the minor we will remove the ith row and jth column of A.

So, to compute $M_{12}$ (which we'll need for $C_{12}$) we'll need to compute the determinant of the submatrix we get by removing the 1st row and 2nd column of A. Here is that work; we'll leave it to you to verify the determinant computation.
\[M_{12} = \begin{vmatrix} -1 & 3 & 9 \\ 5 & -1 & 6 \\ 3 & 1 & -2 \end{vmatrix} = 160\]
Now we can get the cofactor.
\[C_{12} = (-1)^{1+2}M_{12} = (-1)^3(160) = -160\]
Let's now move onto the second cofactor. Here is the work for the minor, where this time we remove the 2nd row and 4th column.
\[M_{24} = \begin{vmatrix} 4 & 0 & 10 \\ 5 & -5 & -1 \\ 3 & 7 & 1 \end{vmatrix} = 508\]
The cofactor in this case is,
\[C_{24} = (-1)^{2+4}M_{24} = (-1)^6(508) = 508\]
Here is the work for the final cofactor.
\[M_{32} = \begin{vmatrix} 4 & 10 & 4 \\ -1 & 3 & 9 \\ 3 & 1 & -2 \end{vmatrix} = 150 \qquad\qquad C_{32} = (-1)^{3+2}M_{32} = (-1)^5(150) = -150\]

Notice that the cofactor is really just $\pm M_{ij}$ depending upon i and j. If the subscripts of the cofactor add to an even number then we leave the minor alone (i.e. no "−" sign) when writing down the cofactor. Likewise, if the subscripts on the cofactor sum to an odd number then we add a "−" to the minor when writing down the cofactor.
We can use this fact to derive a table that will allow us to quickly determine whether we should add a "−" onto the minor or leave it alone when writing down the cofactor.

Let's start with $C_{11}$. In this case the subscripts sum to an even number and so we don't tack a minus sign onto the minor. Now, let's move along the first row. The next cofactor would then be $C_{12}$ and in this case the subscripts add to an odd number, so we tack a minus sign onto the minor. For the next cofactor, $C_{13}$, we would leave the minor alone, and for the next, $C_{14}$, we'd tack a minus sign on, etc.

As you can see from this work, if we start at the leftmost entry of the first row we have a "+" in front of the minor and then as we move across the row the signs alternate. If you think about it, this will also happen as we move down the first column. In fact, this will happen as we move across any row and down any column. We can summarize this idea in the following "sign matrix" that will tell us if we should leave the minor alone (i.e. tack on a "+") or change its sign (i.e. tack on a "−") when writing down the cofactor.
\[\begin{bmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}\]
Okay, we can now talk about how to use cofactors to compute the determinant of a general square matrix. In fact there are two ways we can use cofactors, as the following theorem shows.

Theorem 1 If A is an n×n matrix.
(a) Choose any row, say row i. Then,
\[\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}\]
(b) Choose any column, say column j. Then,
\[\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}\]

What this theorem tells us is that if we pick any row, all we need to do is go across that row, multiply each entry by its cofactor, add all these products up, and we'll have the determinant for the matrix. It also says that we could do the same thing, only instead of going across any row we could move down any column. The process of moving across a row or down a column is often called a cofactor expansion.

Let's work some examples of this so we can see it in action.

Example 2 For the following matrix compute the determinant using the given cofactor expansions.
\[A = \begin{bmatrix} 4 & -2 & 1 \\ -2 & 6 & 3 \\ 7 & 5 & 0 \end{bmatrix}\]
(a) Expand along the first row.
(b) Expand along the third row.
(c) Expand along the second column.

Solution
First, notice that according to the theorem we should get the same result in all three parts.

(a) Expand along the first row. Here is the cofactor expansion in terms of symbols for this part.
\[\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}\]
Now, let's plug in for all the quantities. We will just plug in for the entries. For the cofactors we'll write down the minor and a "+1" or a "−1" depending on which sign each minor needs. We'll determine these signs by going to our "sign matrix" above, starting at the first entry in the particular row/column we're expanding along, and writing down the appropriate sign as we move along that row or column. Here is the work for this expansion.
\[\det(A) = (4)(+1)\begin{vmatrix} 6 & 3 \\ 5 & 0 \end{vmatrix} + (-2)(-1)\begin{vmatrix} -2 & 3 \\ 7 & 0 \end{vmatrix} + (1)(+1)\begin{vmatrix} -2 & 6 \\ 7 & 5 \end{vmatrix}\]
\[= 4(-15) + 2(-21) + 1(-52) = -154\]
We'll leave it to you to verify the 2×2 determinant computations.

(b) Expand along the third row. We'll do this one without all the explanations.
\[\det(A) = a_{31}C_{31} + a_{32}C_{32} + a_{33}C_{33}\]
\[= (7)(+1)\begin{vmatrix} -2 & 1 \\ 6 & 3 \end{vmatrix} + (5)(-1)\begin{vmatrix} 4 & 1 \\ -2 & 3 \end{vmatrix} + (0)(+1)\begin{vmatrix} 4 & -2 \\ -2 & 6 \end{vmatrix}\]
\[= 7(-12) - 5(14) + 0(20) = -154\]
So, the same answer as the first part, which is good since that was supposed to happen. Notice that the signs for the cofactors in this case were the same as the signs in the first case. This is because the first and third rows of our "sign matrix" are identical. Also, notice that we didn't really need to compute the third cofactor since the third entry was zero. We did it here just to get one more example of a cofactor into the notes.

(c) Expand along the second column. Let's take a look at the final expansion. In this one we're going down a column and notice, from our "sign matrix", that this time we'll be starting the cofactor signs off with a "−1", unlike the first two expansions.
\[\det(A) = a_{12}C_{12} + a_{22}C_{22} + a_{32}C_{32}\]
\[= (-2)(-1)\begin{vmatrix} -2 & 3 \\ 7 & 0 \end{vmatrix} + (6)(+1)\begin{vmatrix} 4 & 1 \\ 7 & 0 \end{vmatrix} + (5)(-1)\begin{vmatrix} 4 & 1 \\ -2 & 3 \end{vmatrix}\]
\[= 2(-21) + 6(-7) - 5(14) = -154\]
Again, the same as the first two, as we expected.

There was another point to the previous problem apart from showing that the row or column we choose to expand along won't matter. Because we are allowed to expand along any row, unless the problem statement forces us to use a particular row or column we will get to choose the row/column to expand along. When choosing, we should choose a row/column that will reduce the amount of work we've got to do if possible.

Comparing the parts of the previous example should suggest something we should be looking for in making this choice. In part (b) it was pointed out that we didn't really need to compute the third cofactor since the third entry in that row was zero. Choosing to expand along a row/column with zeroes in it will instantly cut back on the number of cofactors that we'll need to compute. So, when allowed to choose which row/column to expand along, we should look for the one with the most zeroes. In the case of the previous example that means that the quickest expansions would be either the 3rd row or the 3rd column, since both of those have a zero in them and none of the other rows/columns do.

So, let's take a look at a couple more examples.

Example 3 Using a cofactor expansion compute the determinant of,
\[A = \begin{bmatrix} 5 & -2 & 2 & 7 \\ 1 & 0 & 0 & 3 \\ -3 & 1 & 5 & 0 \\ 3 & -1 & -9 & 4 \end{bmatrix}\]

Solution
Since the row or column to use for the cofactor expansion was not given in the problem statement we get to choose which one we want to use. Recalling the brief discussion after the last example, we know that we want to choose the row/column with the most zeroes in it, since that will mean we won't have to compute cofactors for each entry that is a zero. So, it looks like the second row would be a good choice for the expansion since it has two zeroes in it.

Here is the expansion for this row. As with the previous expansions we'll explicitly give the "+1" or "−1" for the cofactors, and the minors as well, so you can see where everything in the expansion is coming from.
\[\det(A) = (1)(-1)\begin{vmatrix} -2 & 2 & 7 \\ 1 & 5 & 0 \\ -1 & -9 & 4 \end{vmatrix} + (0)(+1)M_{22} + (0)(-1)M_{23} + (3)(+1)\begin{vmatrix} 5 & -2 & 2 \\ -3 & 1 & 5 \\ 3 & -1 & -9 \end{vmatrix}\]
We didn't bother to write down the minors $M_{22}$ and $M_{23}$ because of the zero entries. How we choose to compute the determinants for the first and last entries is up to us at this point.
We could use a cofactor expansion on each of them or we could use the technique we learned in the first section of this chapter. Either way will get the same answer and we'll leave it to you to verify these determinants. The determinant for this matrix is,
\[\det(A) = -(-76) + 3(4) = 88\]

Example 4 Using a cofactor expansion compute the determinant of,
\[B = \begin{bmatrix} 2 & 2 & 0 & 3 & -4 \\ 4 & 1 & 0 & 1 & 1 \\ 0 & 5 & 0 & 0 & -1 \\ 3 & 2 & 3 & 4 & 3 \\ 7 & 2 & 0 & 9 & 5 \end{bmatrix}\]

Solution
This is a large matrix, but if you check out the third column you'll see that there is only one non-zero entry in that column, so that looks like a good column to do a cofactor expansion on. Here's the cofactor expansion for this matrix. Again, we explicitly added in the "+1" and "−1" and won't bother to write down the minors for the zero entries.
\[\det(B) = (0)(+1)M_{13} + (0)(-1)M_{23} + (0)(+1)M_{33} + (3)(-1)\begin{vmatrix} 2 & 2 & 3 & -4 \\ 4 & 1 & 1 & 1 \\ 0 & 5 & 0 & -1 \\ 7 & 2 & 9 & 5 \end{vmatrix} + (0)(+1)M_{53}\]
Now, in order to complete this problem we'll need to take the determinant of a 4×4 matrix and the only way that we've got to do that is to once again do a cofactor expansion on it. In this case it looks like the third row will be the best option since it's got more zero entries than any other row or column. This time we'll just put in the terms that come from non-zero entries. Here is the remainder of this problem. Also don't forget that there is still a coefficient of −3 in front of this determinant!
\[\det(B) = -3\begin{vmatrix} 2 & 2 & 3 & -4 \\ 4 & 1 & 1 & 1 \\ 0 & 5 & 0 & -1 \\ 7 & 2 & 9 & 5 \end{vmatrix} = -3\left( (5)(-1)\begin{vmatrix} 2 & 3 & -4 \\ 4 & 1 & 1 \\ 7 & 9 & 5 \end{vmatrix} + (-1)(-1)\begin{vmatrix} 2 & 2 & 3 \\ 4 & 1 & 1 \\ 7 & 2 & 9 \end{vmatrix} \right)\]
\[= -3\big( (-5)(-163) + (1)(-41) \big) = -3(774) = -2322\]

This last example has shown one of the drawbacks to this method. Once the size of the matrix gets large there can be a lot of work involved in the method. Also, for anything larger than a 4×4 matrix you are almost assured of having to do cofactor expansions multiple times until the size of the matrix gets down to 3×3, at which point other methods can be used. There is a way to simplify things down somewhat, but we'll need the topic of the next section before we can show that.

Now let's move onto the final topic of this section. It turns out that we can also use cofactors to determine the inverse of an invertible matrix. To see how this is done we'll first need a quick definition.

Definition 3 Let A be an n×n matrix and $C_{ij}$ be the cofactor of $a_{ij}$. The matrix of cofactors from A is,
\[\begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}\]
The adjoint of A is the transpose of the matrix of cofactors and is denoted by adj(A).

Example 5 Compute the adjoint of the following matrix.
\[A = \begin{bmatrix} 4 & -2 & 1 \\ -2 & 6 & 3 \\ 7 & 5 & 0 \end{bmatrix}\]

Solution
We need the cofactors for each of the entries from this matrix. This is the matrix from Example 2 and in that example we computed all the cofactors except for $C_{21}$ and $C_{23}$, so here are those computations.
\[C_{21} = (-1)^{2+1}\begin{vmatrix} -2 & 1 \\ 5 & 0 \end{vmatrix} = (-1)(-5) = 5 \qquad C_{23} = (-1)^{2+3}\begin{vmatrix} 4 & -2 \\ 7 & 5 \end{vmatrix} = (-1)(34) = -34\]
Here are the others from Example 2.
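Theorem 1 also gives a straightforward recursive program for determinants. The sketch below (our own naming) always expands along the first row for simplicity; a smarter version would pick the row or column with the most zeroes, exactly as the discussion above suggests.

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (Theorem 1)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue                                  # zero entries contribute nothing
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += A[0][j] * (-1) ** j * det_cofactor(minor)  # entry times its cofactor
    return total

# Matches Example 2; try it on Examples 3 and 4 as well.
print(det_cofactor([[4, -2, 1], [-2, 6, 3], [7, 5, 0]]))   # -154
```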
\[C_{11} = -15 \quad C_{12} = 21 \quad C_{13} = -52 \quad C_{22} = -7 \quad C_{31} = -12 \quad C_{32} = -14 \quad C_{33} = 20\]
The matrix of cofactors is then,
\[\begin{bmatrix} -15 & 21 & -52 \\ 5 & -7 & -34 \\ -12 & -14 & 20 \end{bmatrix}\]
The adjoint is then,
\[\operatorname{adj}(A) = \begin{bmatrix} -15 & 5 & -12 \\ 21 & -7 & -14 \\ -52 & -34 & 20 \end{bmatrix}\]

We started this portion of this section off by saying that we were going to see how to use cofactors to determine the inverse of a matrix. Here is the theorem that will tell us how to do that.

Theorem 2 If A is an invertible matrix then
\[A^{-1} = \frac{1}{\det(A)}\operatorname{adj}(A)\]

Example 6 Use the adjoint matrix to compute the inverse of the following matrix.
\[A = \begin{bmatrix} 4 & -2 & 1 \\ -2 & 6 & 3 \\ 7 & 5 & 0 \end{bmatrix}\]

Solution
We've done most of the work for this problem already. In Example 2 we determined that
\[\det(A) = -154\]
and in Example 5 we found the adjoint to be
\[\operatorname{adj}(A) = \begin{bmatrix} -15 & 5 & -12 \\ 21 & -7 & -14 \\ -52 & -34 & 20 \end{bmatrix}\]
Therefore, the inverse of the matrix is,
\[A^{-1} = -\frac{1}{154}\begin{bmatrix} -15 & 5 & -12 \\ 21 & -7 & -14 \\ -52 & -34 & 20 \end{bmatrix} = \begin{bmatrix} \frac{15}{154} & -\frac{5}{154} & \frac{6}{77} \\ -\frac{3}{22} & \frac{1}{22} & \frac{1}{11} \\ \frac{26}{77} & \frac{17}{77} & -\frac{10}{77} \end{bmatrix}\]
You might want to verify this using the row reduction method we used in the previous chapter for the practice.

Using Row Reduction To Compute Determinants

In this section we'll take a look at the second method for computing determinants. The idea in this section is to use row reduction on a matrix to get it down to row-echelon form. Since we're computing determinants we know that the matrix, A, we're working with will be square, so the row-echelon form of the matrix will be an upper triangular matrix and we know how to quickly compute the determinant of a triangular matrix.

So, since we already know how to do row reduction, all we need to know before we can work some problems is how the row operations used in the row reduction process will affect the determinant. Before proceeding we should point out that there is a set of elementary column operations that mirror the elementary row operations: we can multiply a column by a scalar, c, we can interchange two columns and we can add a multiple of one column onto another column. These could just as easily be used as the row operations and so all the theorems in this section will make note of that. We'll just be using row operations, however, in our examples.

Here is the theorem that tells us how row or column operations will affect the value of the determinant of a matrix.

Theorem 1 Let A be a square matrix.
(a) If B is the matrix that results from multiplying a row or column of A by a scalar, c, then $\det(B) = c\det(A)$.
(b) If B is the matrix that results from interchanging two rows or two columns of A then $\det(B) = -\det(A)$.
(c) If B is the matrix that results from adding a multiple of one row of A onto another row of A, or adding a multiple of one column of A onto another column of A, then $\det(B) = \det(A)$.

Notice that the row operation that we'll be using the most in the row reduction process will not change the determinant at all. The operations that we're going to need to worry about are the first two, and the second is easy enough to take care of. If we interchange two rows the determinant changes by a minus sign. We are going to have to be a little careful with the first one, however.
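Theorem 2 is mostly of theoretical interest (row reduction is far faster for big matrices), but it is easy to code as a companion to the cofactor routine sketched earlier. This sketch uses exact fractions so the entries come out like those in Example 6; the naming is our own and it relies on the det_cofactor function given above.

```python
from fractions import Fraction

def inverse_by_adjoint(A):
    """A^(-1) = (1/det(A)) adj(A), built entry by entry from cofactors (Theorem 2)."""
    n = len(A)
    d = det_cofactor(A)
    if d == 0:
        raise ValueError("matrix is singular")        # Theorem 5 of the previous section
    inv = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
            # adj(A) is the TRANSPOSE of the cofactor matrix, hence inv[j][i]
            inv[j][i] = Fraction((-1) ** (i + j) * det_cofactor(minor), d)
    return inv

# Example 6: first row comes out [15/154, -5/154, 6/77]
print(inverse_by_adjoint([[4, -2, 1], [-2, 6, 3], [7, 5, 0]])[0])
```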
Using Row Reduction To Compute Determinants

In this section we'll take a look at the second method for computing determinants. The idea in this section is to use row reduction on a matrix to get it down to a row-echelon form. Since we're computing determinants we know that the matrix, A, we're working with will be square and so the row-echelon form of the matrix will be an upper triangular matrix, and we know how to quickly compute the determinant of a triangular matrix. So, since we already know how to do row reduction, all we need to know before we can work some problems is how the row operations used in the row reduction process will affect the determinant.

Before proceeding we should point out that there is a set of elementary column operations that mirror the elementary row operations. We can multiply a column by a scalar, c, we can interchange two columns, and we can add a multiple of one column onto another column. These column operations affect the determinant in exactly the same way as the corresponding row operations, and all the theorems in this section will make note of that. We'll just be using row operations, however, in our examples.

Here is the theorem that tells us how row or column operations will affect the value of the determinant of a matrix.

Theorem 1 Let A be a square matrix.
(a) If B is the matrix that results from multiplying a row or column of A by a scalar, c, then $\det(B) = c\det(A)$.
(b) If B is the matrix that results from interchanging two rows or two columns of A then $\det(B) = -\det(A)$.
(c) If B is the matrix that results from adding a multiple of one row of A onto another row of A, or adding a multiple of one column of A onto another column of A, then $\det(B) = \det(A)$.

Notice that the row operation that we'll be using the most in the row reduction process will not change the determinant at all. The operations that we're going to need to worry about are the first two, and the second is easy enough to take care of: if we interchange two rows the determinant changes by a minus sign. We are going to have to be a little careful with the first one, however. Let's check out an example of how this method works in order to see what's going on.

Example 1 Use row reduction to compute the determinant of the following matrix.

$$A = \begin{bmatrix} 4 & 12 \\ -7 & 5 \end{bmatrix}$$

Solution There is of course no real reason to do row reduction on this matrix in order to compute the determinant. We can find it easily enough at this point. In fact, let's do that so we can check the results of our work after we do row reduction on it.

$$\det(A) = (4)(5) - (12)(-7) = 104$$

Okay, now let's do this with row reduction to see what we've got. We need to reduce this down to row-echelon form, and while we could easily use the third row operation to get a 1 in the first entry of the first row, let's just divide the first row by 4, since that's the one operation we're going to need to be careful with. So, let's do the first operation and see what we've got.

$$A = \begin{bmatrix} 4 & 12 \\ -7 & 5 \end{bmatrix} \;\xrightarrow{\frac14 R_1}\; B = \begin{bmatrix} 1 & 3 \\ -7 & 5 \end{bmatrix}$$

So, we called the result B and let's see what the determinant of this matrix is.

$$\det(B) = (1)(5) - (3)(-7) = 26 = \frac14\det(A)$$

So, the results of the theorem are verified for this step. The next step is to convert the -7 into a zero. Let's do that and see what we get.

$$B \;\xrightarrow{R_2 + 7R_1}\; C = \begin{bmatrix} 1 & 3 \\ 0 & 26 \end{bmatrix}$$

According to the theorem, C should have the same determinant as B, and it does (you should verify this statement). The final step is to convert the 26 into a 1.

$$C \;\xrightarrow{\frac{1}{26}R_2}\; D = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}$$

Now, we've got the following,

$$\det(D) = 1 = \frac{1}{26}\det(C)$$

Once again the theorem is verified.

Now, just how does all of this help us to find the determinant of the original matrix? We could work our way backwards from det(D) and figure out what det(A) is. However, there is a way to modify our work above that will allow us to also get the answer once we reach row-echelon form.

To see how we do this let's go back to the first operation that we did and recall that when we were done we had,

$$\det(B) = \frac14\det(A) \qquad\text{OR}\qquad \det(A) = 4\det(B)$$

Written in another way this is,

$$\det(A) = \begin{vmatrix} 4 & 12 \\ -7 & 5 \end{vmatrix} = 4\begin{vmatrix} 1 & 3 \\ -7 & 5 \end{vmatrix} = 4\det(B)$$

Notice that the determinants, when written in the "matrix" form, are pretty much what we originally wrote down when doing the row operation. Therefore, instead of writing down the row operation as we did above, let's just use this "matrix" form of the determinant and write the row operation as follows.

$$\det(A) = \begin{vmatrix} 4 & 12 \\ -7 & 5 \end{vmatrix} \;\overset{\frac14 R_1}{=}\; 4\begin{vmatrix} 1 & 3 \\ -7 & 5 \end{vmatrix}$$

In going from the matrix on the left to the matrix on the right we performed the operation $\frac14 R_1$, and in the process we changed the value of the determinant. So, since we've got an equal sign here, we need to also modify the determinant of the matrix on the right so that it will remain equal to the determinant of the matrix on the left. As shown above, we can do this by multiplying the matrix on the right by the reciprocal of the scalar we used in the row operation.

Let's complete this, and notice that in the second step we aren't going to change the value of the determinant since we're adding a multiple of the first row onto the second row. In the final operation we divided the second row by 26 and so we'll need to multiply the determinant on the right by 26 to preserve the equality of the determinants. Here is the complete work for this problem using these ideas.

$$\det(A) = \begin{vmatrix} 4 & 12 \\ -7 & 5 \end{vmatrix} \;\overset{\frac14 R_1}{=}\; 4\begin{vmatrix} 1 & 3 \\ -7 & 5 \end{vmatrix} \;\overset{R_2+7R_1}{=}\; 4\begin{vmatrix} 1 & 3 \\ 0 & 26 \end{vmatrix} \;\overset{\frac{1}{26}R_2}{=}\; (4)(26)\begin{vmatrix} 1 & 3 \\ 0 & 1 \end{vmatrix}$$

Okay, we're down to row-echelon form, so let's strip out all the intermediate steps and see what we've got. The matrix on the right is triangular, and we know that determinants of triangular matrices are just the product of the main diagonal entries, so the determinant of A is,

$$\det(A) = (4)(26)(1)(1) = 104$$
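The bookkeeping above translates directly into code. Here is a minimal Python sketch of computing a determinant by row reduction; the function name and pivoting strategy are our own choices. It only ever uses interchanges and the third row operation, so the only correction to track is the sign flip from each interchange.

```python
def det_by_row_reduction(M):
    """Reduce M to upper triangular form, tracking sign flips from row
    interchanges, then multiply the diagonal entries together."""
    A = [row[:] for row in M]  # work on a copy
    n = len(A)
    sign = 1
    for col in range(n):
        # find a row with a nonzero pivot, interchanging if needed
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0.0  # no pivot in this column => singular
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign  # a row interchange flips the sign
        # the third row operation leaves the determinant unchanged
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            A[r] = [a - m * b for a, b in zip(A[r], A[col])]
    result = float(sign)
    for i in range(n):
        result *= A[i][i]
    return result

print(det_by_row_reduction([[4, 12], [-7, 5]]))  # 104.0
```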
Now, that was a lot of work to compute the determinant, and in general we wouldn't use this method on a 2×2 matrix, but by doing it on one here it allowed us to investigate the method in detail without having to deal with a lot of steps.

There are a couple of issues to point out before we move into another, more complicated problem. First, we didn't do any row interchanges in the above example, but the theorem tells us that an interchange will only change the sign on the determinant. So, if we do a row interchange in our work we'll just tack a minus sign onto the determinant. Second, we took the matrix all the way down to row-echelon form, but if you stop to think about it there's really nothing special about that in this case. All we need to do is reduce the matrix to a triangular matrix and then use the fact that we can quickly find the determinant of any triangular matrix.

From this point on we'll not be going all the way to row-echelon form. We'll just make sure that we reduce the matrix down to a triangular matrix and then stop and compute the determinant.

Example 2 Use row reduction to compute the determinant of the following matrix.

$$A = \begin{bmatrix} 2 & 10 & -2 \\ 1 & 0 & 7 \\ 0 & 3 & 5 \end{bmatrix}$$

Solution We'll do this one with less explanation. Just remember that if we interchange rows we tack a minus sign onto the determinant, and if we multiply a row by a scalar we'll need to multiply the new determinant by the reciprocal of the scalar.

$$\det(A) = \begin{vmatrix} 2 & 10 & -2 \\ 1 & 0 & 7 \\ 0 & 3 & 5 \end{vmatrix} \;\overset{R_1 \leftrightarrow R_2}{=}\; -\begin{vmatrix} 1 & 0 & 7 \\ 2 & 10 & -2 \\ 0 & 3 & 5 \end{vmatrix} \;\overset{R_2 - 2R_1}{=}\; -\begin{vmatrix} 1 & 0 & 7 \\ 0 & 10 & -16 \\ 0 & 3 & 5 \end{vmatrix}$$

$$\overset{\frac{1}{10}R_2}{=}\; -10\begin{vmatrix} 1 & 0 & 7 \\ 0 & 1 & -\frac{8}{5} \\ 0 & 3 & 5 \end{vmatrix} \;\overset{R_3 - 3R_2}{=}\; -10\begin{vmatrix} 1 & 0 & 7 \\ 0 & 1 & -\frac{8}{5} \\ 0 & 0 & \frac{49}{5} \end{vmatrix}$$

Okay, we've gotten the matrix down to triangular form, and so at this point we can stop and just take the determinant of that, making sure to keep the scalars that are multiplying it. Here is the final computation for this problem.

$$\det(A) = -10(1)(1)\left(\frac{49}{5}\right) = -98$$

Example 3 Use row reduction to compute the determinant of the following matrix.

$$A = \begin{bmatrix} 3 & 0 & -6 & -3 \\ 0 & 2 & 3 & 0 \\ 4 & -7 & 2 & 0 \\ 2 & 0 & -1 & 10 \end{bmatrix}$$

Solution Okay, there's going to be some work here, so let's get going on it.

$$\det(A) = \begin{vmatrix} 3 & 0 & -6 & -3 \\ 0 & 2 & 3 & 0 \\ 4 & -7 & 2 & 0 \\ 2 & 0 & -1 & 10 \end{vmatrix} \;\overset{\frac13 R_1}{=}\; 3\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 2 & 3 & 0 \\ 4 & -7 & 2 & 0 \\ 2 & 0 & -1 & 10 \end{vmatrix} \;\overset{\substack{R_3 - 4R_1 \\ R_4 - 2R_1}}{=}\; 3\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 2 & 3 & 0 \\ 0 & -7 & 10 & 4 \\ 0 & 0 & 3 & 12 \end{vmatrix}$$

$$\overset{\frac12 R_2}{=}\; (3)(2)\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 1 & \frac32 & 0 \\ 0 & -7 & 10 & 4 \\ 0 & 0 & 3 & 12 \end{vmatrix} \;\overset{R_3 + 7R_2}{=}\; (3)(2)\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 1 & \frac32 & 0 \\ 0 & 0 & \frac{41}{2} & 4 \\ 0 & 0 & 3 & 12 \end{vmatrix}$$

$$\overset{\frac{2}{41}R_3}{=}\; (3)(2)\left(\frac{41}{2}\right)\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 1 & \frac32 & 0 \\ 0 & 0 & 1 & \frac{8}{41} \\ 0 & 0 & 3 & 12 \end{vmatrix} \;\overset{R_4 - 3R_3}{=}\; (3)(2)\left(\frac{41}{2}\right)\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 1 & \frac32 & 0 \\ 0 & 0 & 1 & \frac{8}{41} \\ 0 & 0 & 0 & \frac{468}{41} \end{vmatrix}$$

Okay, that was a lot of work, but we've gotten it into a form we can deal with. Here's the determinant.

$$\det(A) = (3)(2)\left(\frac{41}{2}\right)\left(\frac{468}{41}\right) = 1404$$

Now, as the previous example has shown us, this method can be a lot of work, and it's work in which, if we aren't paying attention, it is easy to make a mistake. There is a method that we could have used here to significantly reduce our work, and it's not even a new method. Notice that with this method at each step we have a new determinant that needs computing.
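If you want to check hand computations like these without redoing them, a library determinant gives a quick numerical sanity check. Here is one in NumPy for the two matrices above.

```python
import numpy as np

A2 = np.array([[2, 10, -2], [1, 0, 7], [0, 3, 5]])
A3 = np.array([[3, 0, -6, -3], [0, 2, 3, 0], [4, -7, 2, 0], [2, 0, -1, 10]])
print(np.linalg.det(A2))  # approximately -98
print(np.linalg.det(A3))  # approximately 1404
```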
We continued down until we got a triangular matrix since that would be easy for us to compute. However, there's nothing keeping us from stopping at any step and using some other method for computing the determinant. In fact, if you look at our work, after the second step we've got a column with a 1 in the first entry and zeroes below it. If we were in the previous section we'd just do a cofactor expansion along this column for this determinant. So, let's do that. No one ever said we couldn't mix the methods from this and the previous section in a problem.

Example 4 Use row reduction and a cofactor expansion to compute the determinant of the matrix in Example 3.

Solution Okay, this "new" method says to use row reduction until we get a matrix that would be easy to do a cofactor expansion on. As noted earlier, that means only doing the first two steps. So, for the sake of completeness, here are those two steps again.

$$\det(A) = \begin{vmatrix} 3 & 0 & -6 & -3 \\ 0 & 2 & 3 & 0 \\ 4 & -7 & 2 & 0 \\ 2 & 0 & -1 & 10 \end{vmatrix} \;\overset{\frac13 R_1}{=}\; 3\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 2 & 3 & 0 \\ 4 & -7 & 2 & 0 \\ 2 & 0 & -1 & 10 \end{vmatrix} \;\overset{\substack{R_3 - 4R_1 \\ R_4 - 2R_1}}{=}\; 3\begin{vmatrix} 1 & 0 & -2 & -1 \\ 0 & 2 & 3 & 0 \\ 0 & -7 & 10 & 4 \\ 0 & 0 & 3 & 12 \end{vmatrix}$$

At this point we'll just do a cofactor expansion along the first column.

$$\det(A) = 3\left( (1)(+1)\begin{vmatrix} 2 & 3 & 0 \\ -7 & 10 & 4 \\ 0 & 3 & 12 \end{vmatrix} + (0)C_{21} + (0)C_{31} + (0)C_{41} \right) = 3\begin{vmatrix} 2 & 3 & 0 \\ -7 & 10 & 4 \\ 0 & 3 & 12 \end{vmatrix}$$

At this point we can use any method to compute the determinant of the new 3×3 matrix, so we'll leave it to you to verify that

$$\det(A) = 3(468) = 1404$$

There is one final idea that we need to discuss in this section before moving on.

Theorem 2 Suppose that A is a square matrix and that two of its rows are proportional or two of its columns are proportional. Then $\det(A) = 0$.

When we say that two rows or two columns are proportional we mean that one of the rows (columns) is a scalar times another row (column) of the matrix. We're not going to prove this theorem, but if you think about it, it should make some sense. Let's suppose that two rows are proportional, so we know that one of the rows is a scalar multiple of another row. This means we can use the third row operation to make one of the rows all zero. From Theorem 1 above we know that both of these matrices must have the same determinant, and from Theorem 7 of the Determinant Properties section we know that if a matrix has a row or column of all zeroes then that matrix is singular, i.e. its determinant is zero. Therefore both matrices must have a zero determinant.

Here is a quick example showing this.

Example 5 Show that the following matrix is singular.

$$A = \begin{bmatrix} 4 & -1 & 3 \\ 2 & 5 & -1 \\ -8 & 2 & -6 \end{bmatrix}$$

Solution We can use Theorem 2 above upon noticing that the third row is -2 times the first row. That's all we need to use this theorem, so technically we've answered the question. However, let's go through the steps outlined above to also show that this matrix is singular. To do this we'd do one row operation to get the row of all zeroes into the matrix as follows.

$$\det(A) = \begin{vmatrix} 4 & -1 & 3 \\ 2 & 5 & -1 \\ -8 & 2 & -6 \end{vmatrix} \;\overset{R_3 + 2R_1}{=}\; \begin{vmatrix} 4 & -1 & 3 \\ 2 & 5 & -1 \\ 0 & 0 & 0 \end{vmatrix}$$

We know by Theorem 1 above that these two matrices have the same determinant. Then, because we see a row of all zeroes, we can invoke Theorem 7 from the Determinant Properties section to say that the determinant on the right must be zero. As we pointed out, these two matrices have the same determinant, and so we've also got $\det(A) = 0$ and so A is singular.
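The proportional-row shortcut is easy to mechanize. Here is a small sketch (the helper is our own, not anything from the notes) that looks for a row that is a single scalar multiple of an earlier row, using exact fractions to avoid round-off.

```python
from fractions import Fraction

def has_proportional_rows(M):
    """Check whether some row of M is a scalar multiple of an earlier
    row; by Theorem 2 that forces det(M) = 0."""
    n = len(M)
    for i in range(n):
        for j in range(i + 1, n):
            pairs = list(zip(M[i], M[j]))
            if any(a == 0 and b != 0 for a, b in pairs):
                continue  # no single scalar c can give row_j = c * row_i
            ratios = {Fraction(b, a) for a, b in pairs if a != 0}
            if len(ratios) <= 1:
                return True  # every entry agrees on one scalar
    return False

A = [[4, -1, 3], [2, 5, -1], [-8, 2, -6]]
print(has_proportional_rows(A))  # True: the third row is -2 times the first
```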
You might want to verify that this matrix is singular by computing its determinant with one of the other methods we've looked at, for the practice.

We've now looked at several methods for computing determinants, and as we've seen, each can be long and prone to mistakes. On top of that, for some matrices one method may work better than another. So, when faced with a determinant you'll need to look at it and determine which method to use, and unless otherwise specified by the problem statement you should use the one that you find the easiest. Note that this may not be the method that somebody else chooses to use, but you shouldn't worry about that. You should use the method you are the most comfortable with.

Cramer's Rule

In this section we're going to come back and take one more look at solving systems of equations. This time we're actually going to be able to get a general solution to certain systems of equations. It won't work on all systems of equations, and as we'll see, if the system is too large it will probably be quicker to use one of the other methods that we've got for solving systems of equations. So, let's jump into the method.

Theorem 1 Suppose that A is an n×n invertible matrix. Then the solution to the system $A\mathbf{x} = \mathbf{b}$ is given by,

$$x_1 = \frac{\det(A_1)}{\det(A)}, \qquad x_2 = \frac{\det(A_2)}{\det(A)}, \qquad \ldots, \qquad x_n = \frac{\det(A_n)}{\det(A)}$$

where $A_i$ is the matrix found by replacing the ith column of A with b.

Proof: The proof of this is actually pretty simple. First, because we know that A is invertible, we know that the inverse exists and that $\det(A) \neq 0$. We also know that the solution to the system can be given by,

$$\mathbf{x} = A^{-1}\mathbf{b}$$

From the section on cofactors we know how to write the inverse in terms of the adjoint of A. Using this gives us,

$$\mathbf{x} = \frac{1}{\det(A)}\,\mathrm{adj}(A)\,\mathbf{b} = \frac{1}{\det(A)}\begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$

Recall that $C_{ij}$ is the cofactor of $a_{ij}$. Also note that the subscripts on the cofactors above appear to be backwards, but they are correctly placed: recall that we get the adjoint by first forming a matrix with $C_{ij}$ in the ith row and jth column and then taking the transpose.

Now, multiply out the matrices to get,

$$\mathbf{x} = \frac{1}{\det(A)}\begin{bmatrix} b_1 C_{11} + b_2 C_{21} + \cdots + b_n C_{n1} \\ b_1 C_{12} + b_2 C_{22} + \cdots + b_n C_{n2} \\ \vdots \\ b_1 C_{1n} + b_2 C_{2n} + \cdots + b_n C_{nn} \end{bmatrix}$$

The entry in the ith row of x, which is $x_i$ in the solution, is

$$x_i = \frac{b_1 C_{1i} + b_2 C_{2i} + \cdots + b_n C_{ni}}{\det(A)}$$

Next, let's define,

$$A_i = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1,i-1} & b_1 & a_{1,i+1} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2,i-1} & b_2 & a_{2,i+1} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{n,i-1} & b_n & a_{n,i+1} & \cdots & a_{nn} \end{bmatrix}$$

So, $A_i$ is the matrix we get by replacing the ith column of A with b. Now, if we were to compute the determinant of $A_i$ by expanding along the ith column, the products would each be one of the $b_k$'s times the appropriate cofactor. Notice however that since the only difference between $A_i$ and A is the ith column, the cofactors we get by expanding $A_i$ along the ith column will be exactly the same as the cofactors we would get by expanding A along the ith column. Therefore, the determinant of $A_i$ is given by,

$$\det(A_i) = b_1 C_{1i} + b_2 C_{2i} + \cdots + b_n C_{ni}$$

where $C_{ki}$ is the cofactor of $a_{ki}$ from the matrix A.
Note however that this is exactly the numerator of $x_i$ and so we have,

$$x_i = \frac{\det(A_i)}{\det(A)}$$

as we wanted to prove.

Let's work a quick example to illustrate the method.

Example 1 Use Cramer's Rule to determine the solution to the following system of equations.

$$\begin{aligned} 3x_1 - x_2 + 5x_3 &= -2 \\ -4x_1 + x_2 + 7x_3 &= 10 \\ 2x_1 + 4x_2 - x_3 &= 3 \end{aligned}$$

Solution First let's put the system into matrix form and verify that the coefficient matrix is invertible.

$$\begin{bmatrix} 3 & -1 & 5 \\ -4 & 1 & 7 \\ 2 & 4 & -1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -2 \\ 10 \\ 3 \end{bmatrix} \qquad\qquad A\mathbf{x} = \mathbf{b}$$

$$\det(A) = -187 \neq 0$$

So, the coefficient matrix is invertible and Cramer's Rule can be used on the system. We'll also need det(A) in a bit, so it's good that we now have it. Let's now write down the formulas for the solution to this system.

$$x_1 = \frac{\det(A_1)}{\det(A)} \qquad x_2 = \frac{\det(A_2)}{\det(A)} \qquad x_3 = \frac{\det(A_3)}{\det(A)}$$

where $A_1$ is the matrix formed by replacing the 1st column of A with b, $A_2$ is the matrix formed by replacing the 2nd column of A with b, and $A_3$ is the matrix formed by replacing the 3rd column of A with b. We'll leave it to you to verify the following determinants.

$$\det(A_1) = \begin{vmatrix} -2 & -1 & 5 \\ 10 & 1 & 7 \\ 3 & 4 & -1 \end{vmatrix} = 212 \qquad \det(A_2) = \begin{vmatrix} 3 & -2 & 5 \\ -4 & 10 & 7 \\ 2 & 3 & -1 \end{vmatrix} = -273 \qquad \det(A_3) = \begin{vmatrix} 3 & -1 & -2 \\ -4 & 1 & 10 \\ 2 & 4 & 3 \end{vmatrix} = -107$$

The solution to the system is then,

$$x_1 = -\frac{212}{187} \qquad x_2 = \frac{273}{187} \qquad x_3 = \frac{107}{187}$$

Now, the solution to this system has somewhat messy values, and that would have made the row reduction method prone to mistakes. However, since this solution required us to compute four determinants, you can see that if the system gets too large this becomes a very time consuming method to use. For example, a system with 5 equations and 5 unknowns would require us to compute six 5×5 determinants. At that point, regardless of how messy the final answers are, there is a good chance that the row reduction method would be easier.
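Cramer's Rule is mechanical enough to code directly from the theorem. Here is a sketch (the helper name is ours) that checks Example 1 with exact fractions via SymPy.

```python
from sympy import Matrix

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule: x_i = det(A_i)/det(A), where A_i
    is A with its i-th column replaced by b. Requires det(A) != 0."""
    d = A.det()
    if d == 0:
        raise ValueError("coefficient matrix is not invertible")
    x = []
    for i in range(A.cols):
        Ai = A.copy()
        Ai[:, i] = b  # replace the i-th column with b
        x.append(Ai.det() / d)
    return x

A = Matrix([[3, -1, 5], [-4, 1, 7], [2, 4, -1]])
b = Matrix([-2, 10, 3])
print(cramer(A, b))  # [-212/187, 273/187, 107/187]
```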
Euclidean n-Space

Introduction

In this chapter we are going to start looking at the idea of a vector, and the ultimate goal of this chapter will be to define something called Euclidean n-space. In this chapter we'll be looking at some very specific examples of vectors so we can build up some of the ideas that surround them. We will reserve general vectors for the next chapter.

We will also be taking a quick look at the topic of linear transformations. Linear transformations are a very important idea in the study of Linear Algebra.

Here is a listing of the topics in this chapter.

Vectors – In this section we'll introduce vectors in 2-space and 3-space as well as some of the important ideas about them.

Dot Product & Cross Product – Here we'll look at the dot product and the cross product, two important products for vectors. We'll also take a look at an application of the dot product.

Euclidean n-Space – We'll introduce the idea of Euclidean n-space in this section and extend many of the ideas of the previous two sections.

Linear Transformations – In this section we'll introduce the topic of linear transformations and look at many of their properties.

Examples of Linear Transformations – We'll take a look at quite a few examples of linear transformations in this section.

Vectors

In this section we're going to start taking a look at vectors in 2-space (normal two dimensional space) and 3-space (normal three dimensional space). Later in this chapter we'll be expanding the ideas here to n-space, and we'll be looking at a much more general definition of a vector in the next chapter. However, if we start in 2-space and 3-space we'll be able to use a geometric interpretation that may help us understand some of the concepts we're going to be looking at.

So, let's start off by defining a vector in 2-space or 3-space. A vector can be represented geometrically by a directed line segment that starts at a point A, called the initial point, and ends at a point B, called the terminal point. Below is an example of a vector in 2-space.

Vectors are typically denoted with a boldface lower case letter. For instance, we could represent the vector above by v, w, a, or b, etc. Also, when we've explicitly given the initial and terminal points we will often represent the vector as,

$$\mathbf{v} = \overrightarrow{AB}$$

where the positioning of the upper case letters is important. The A is the initial point and so is listed first, while the terminal point, B, is listed second.

As we can see in the figure of the vector shown above, a vector imparts two pieces of information. A vector will have a direction and a magnitude (the length of the directed line segment). Two vectors with the same magnitude but different directions are different vectors, and likewise two vectors with the same direction but different magnitudes are different. Vectors with the same direction and same magnitude are called equivalent, and even though they may have different initial and terminal points we think of them as equal, so if v and u are two equivalent vectors we will write,

$$\mathbf{v} = \mathbf{u}$$

To illustrate this idea, all of the vectors in the image below (all in 2-space) are equivalent since they have the same direction and magnitude.

It is often difficult to really visualize a vector without a frame of reference, and so we will often introduce a coordinate system to the picture. For example, in 2-space, suppose that v is any vector whose initial point is at the origin of the rectangular coordinate system and whose terminal point is at the coordinates $(v_1, v_2)$ as shown below. In these cases we call the coordinates of the terminal point the components of v and write,

$$\mathbf{v} = (v_1, v_2)$$

We can do a similar thing for vectors in 3-space. Before we get into that, however, let's make sure that you're familiar with all the concepts we might run across in dealing with 3-space. Below is a point in 3-space.

Just as a point in 2-space is described by a pair $(x, y)$, we describe a point in 3-space by a triple $(x, y, z)$. Next, if we take each pair of coordinate axes and look at the plane they form, we call these the coordinate planes and denote them as the xy-plane, yz-plane, and xz-plane respectively. Also note that if we take the general point and move it straight into one of the coordinate planes we get a new point where one of the coordinates is zero. For instance, in the xy-plane we have the point $(x, y, 0)$, etc.

Just as in 2-space, suppose that we've got a vector v whose initial point is the origin of the coordinate system and whose terminal point is given by $(v_1, v_2, v_3)$ as shown below. Just as in 2-space we call $(v_1, v_2, v_3)$ the components of v and write,

$$\mathbf{v} = (v_1, v_2, v_3)$$

Before proceeding any further we should briefly talk about the notation we're using, because it can be confusing sometimes.
We are using the notation $(v_1, v_2, v_3)$ to represent both a point in 3-space and a vector in 3-space, as shown in the figure above. This is something you'll need to get used to. In this class $(v_1, v_2, v_3)$ can be either a point or a vector, and we'll need to be careful and pay attention to the context of the problem, although in many problems it won't really matter. We'll be able to use it as a point or a vector as we need to. The same comment could be made for points/vectors in 2-space.

Now, let's get back to the discussion at hand and notice that the component form of the vector is really telling us how to get from the initial point of the vector to the terminal point of the vector. For example, let's suppose that $\mathbf{v} = (v_1, v_2)$ is a vector in 2-space with initial point $A = (x_1, y_1)$. The first component of the vector, $v_1$, is the amount we have to move to the right (if $v_1$ is positive) or to the left (if $v_1$ is negative). The second component tells us how much to move up or down depending on the sign of $v_2$. The terminal point of v is then given by,

$$B = (x_1 + v_1,\; y_1 + v_2)$$

Likewise, if $\mathbf{v} = (v_1, v_2, v_3)$ is a vector in 3-space with initial point $A = (x_1, y_1, z_1)$, the terminal point is given by,

$$B = (x_1 + v_1,\; y_1 + v_2,\; z_1 + v_3)$$

Notice as well that if the initial point is the origin then the terminal point will be $B = (v_1, v_2, v_3)$, and we once again see that $(v_1, v_2, v_3)$ can represent both a point and a vector.

This can all be turned around as well. Let's suppose that we've got two points in 2-space, $A = (x_1, y_1)$ and $B = (x_2, y_2)$. Then the vector with initial point A and terminal point B is given by,

$$\overrightarrow{AB} = (x_2 - x_1,\; y_2 - y_1)$$

Note that the order of the points is important. The components are found by subtracting the coordinates of the initial point from the coordinates of the terminal point. If we turned this around and wanted the vector with initial point B and terminal point A we'd have,

$$\overrightarrow{BA} = (x_1 - x_2,\; y_1 - y_2)$$

Of course we can also do this in 3-space. Suppose that we want the vector that has an initial point of $A = (x_1, y_1, z_1)$ and a terminal point of $B = (x_2, y_2, z_2)$. This vector is given by,

$$\overrightarrow{AB} = (x_2 - x_1,\; y_2 - y_1,\; z_2 - z_1)$$

Let's see an example of this.

Example 1 Find the vector that starts at $A = (4, -2, 9)$ and ends at $B = (-7, 0, 6)$.

Solution There really isn't much to do here other than use the formula above.

$$\mathbf{v} = \overrightarrow{AB} = (-7 - 4,\; 0 - (-2),\; 6 - 9) = (-11, 2, -3)$$

Here is a sketch showing the points and the vector.

Okay, it's now time to move into the arithmetic of vectors. For each operation we'll look at both a geometric and a component interpretation. The geometric interpretation will help with understanding just what the operation is doing, and the component interpretation will help us to actually do the operation.

There are two quick topics that we first need to address in vector arithmetic. The first is the zero vector. The zero vector, denoted by 0, is a vector with no length. Because the zero vector has no length it is hard to talk about its direction, so by convention we say that the zero vector can have any direction that we need it to have in a given problem.

The next quick topic to discuss is the negative of a vector. If v is a vector then the negative of the vector, denoted by -v, is defined to be the vector with the same length as v but the opposite direction, as shown below.
We’ll see how to compute the negative vector in a bit. Also note that sometimes the negative is called the additive inverse of the vector v. Okay let’s start off the arithmetic with addition. Linear Algebra © 2007 Paul Dawkins 131 Definition 1 Suppose that v and w are two vectors then to find the sum of the two vectors, denoted + v w , we position w so that its initial point coincides with the terminal point of v. The new vector whose initial point is the initial point of v and whose terminal point is the terminal point of w will be the sum of the two vectors, or + v w . Below are three sketches of what we’ve got here with addition of vectors in 2-space. In terms of components we have ( ) 1 2 , v v = v and ( ) 1 2 , w w = w . The sketch on the left matches the definition above. We first sketch in v and the sketch w starting where v left off. The resultant vector is then the sum. In the middle we have the sketch for + w v and as we can see we get exactly the same resultant vector. From this we can see that we will have, + = + v w w v The sketch on the right merges the first two sketches into one and also adds in the components for each of the vectors. It’s a little “busy”, but you can see that the coordinates of the sum are ( ) 1 1 2 2 , v w v w + + . Therefore, for the vectors in 2-space we can compute the sum of two vectors using the following formula. ( ) 1 1 2 2 , v w v w + = + + v w Likewise, if we have two vectors in 3-space, say ( ) 1 2 3 , , v v v = v and ( ) 1 2 3 , , w w w = w , then we’ll have, ( ) 1 1 2 2 3 3 , , v w v w v w + = + + + v w Now that we’ve got addition and the negative of a vector out of the way we can do subtraction. Definition 2 Suppose that we have two vectors v and w then the difference of w from v, denoted by − v w is defined to be, ( ) − = + − v w v w If we make a sketch, in 2-space, for the summation form of the difference we the following sketch. Linear Algebra © 2007 Paul Dawkins 132 Now, while this sketch shows us what the vector for the difference is as a summation we generally like to have a sketch that relates to the two original vectors and not one of the vectors and the negative of the other. We can do this by recalling that any two vectors are equal if the have the same magnitude and direction. Upon recalling this we can pick up the vector representing the difference and moving it as show below. Finally, if we were to go back to the original sketch add in components for the vectors we will see that in 2-space we can compute the difference as follows, ( ) 1 1 2 2 , v w v w − = − − v w and if the vectors are in 3-space the difference is, ( ) 1 1 2 2 3 3 , , v w v w v w − = − − − v w Note that both addition and subtraction will extend naturally to more than two vectors. The final arithmetic operation that we want to take a look at is scalar multiples. Definition 3 Suppose that v is a vector and c is a non-zero scalar (i.e. c is a number) then the scalar multiple, cv, is the vector whose length is c times the length of v and is in the direction of v if c is positive and in the opposite direction of v is c is negative. Here is a sketch of some scalar multiples of a vector v. Linear Algebra © 2007 Paul Dawkins 133 Note that we can see from this that scalar multiples are parallel. In fact it can be shown that if v and w are two parallel vectors then there is a non-zero scalar c such that c = v w , or in other words the two vectors will be scalar multiples of each other. 
It can also be shown that if v is a vector in either 2-space or 3-space then the scalar multiple can be computed as follows,

$$c\mathbf{v} = (cv_1,\; cv_2) \qquad\text{OR}\qquad c\mathbf{v} = (cv_1,\; cv_2,\; cv_3)$$

At this point we can give a formula for the negative of a vector. Let's examine the scalar multiple $(-1)\mathbf{v}$. This is a vector whose length is the same as v, since $|-1| = 1$, and is in the opposite direction of v since the scalar is negative. Hence this vector represents the negative of v. In 3-space this gives,

$$-\mathbf{v} = (-1)\mathbf{v} = (-v_1, -v_2, -v_3)$$

and in 2-space we'll have,

$$-\mathbf{v} = (-1)\mathbf{v} = (-v_1, -v_2)$$

Before we move on to an example let's get some properties of vector arithmetic written down.

Theorem 1 If u, v, and w are vectors in 2-space or 3-space and c and k are scalars then,
(a) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
(b) $\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}$
(c) $\mathbf{u} + \mathbf{0} = \mathbf{0} + \mathbf{u} = \mathbf{u}$
(d) $\mathbf{u} - \mathbf{u} = \mathbf{u} + (-\mathbf{u}) = \mathbf{0}$
(e) $1\mathbf{u} = \mathbf{u}$
(f) $(ck)\mathbf{u} = c(k\mathbf{u}) = k(c\mathbf{u})$
(g) $(c + k)\mathbf{u} = c\mathbf{u} + k\mathbf{u}$
(h) $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$

The proof of all of these comes directly from the component definition of the operations and so is left to you to verify.

At this point we should probably do a couple of examples of vector arithmetic to say that we've done that.

Example 2 Given the following vectors compute the indicated quantity.

$$\mathbf{a} = (4, -6) \qquad \mathbf{b} = (-3, -7) \qquad \mathbf{c} = (-1, 5)$$
$$\mathbf{u} = (1, -2, 6) \qquad \mathbf{v} = (0, 4, -1) \qquad \mathbf{w} = (9, 2, -3)$$

(a) $-\mathbf{w}$
(b) $\mathbf{a} + \mathbf{b}$
(c) $\mathbf{a} - \mathbf{c}$
(d) $\mathbf{a} - 3\mathbf{b} + 10\mathbf{c}$
(e) $4\mathbf{u} + \mathbf{v} - 2\mathbf{w}$

Solution There really isn't too much to these other than to compute the scalar multiples and then do the addition and/or subtraction. For the first three we'll include sketches so you can visualize what's going on with each operation.

(a) $-\mathbf{w} = (-9, -2, 3)$

Here is a sketch of this vector as well as w.

(b) $\mathbf{a} + \mathbf{b} = (4 + (-3),\; -6 + (-7)) = (1, -13)$

Here is a sketch of a and b as well as the sum.

(c) $\mathbf{a} - \mathbf{c} = (4 - (-1),\; -6 - 5) = (5, -11)$

Here is a sketch of a and c as well as the difference.

(d) $\mathbf{a} - 3\mathbf{b} + 10\mathbf{c} = (4, -6) - (-9, -21) + (-10, 50) = (3, 65)$

(e) $4\mathbf{u} + \mathbf{v} - 2\mathbf{w} = (4, -8, 24) + (0, 4, -1) - (18, 4, -6) = (-14, -8, 29)$

There is one final topic that we need to discuss in this section. We are often interested in the length or magnitude of a vector, so we've got a name and notation to use when we're talking about the magnitude of a vector.

Definition 4 If v is a vector then the magnitude of the vector is called the norm of the vector and is denoted by $\|\mathbf{v}\|$. Furthermore, if v is a vector in 2-space then,

$$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2}$$

and if v is in 3-space we have,

$$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + v_3^2}$$

In the 2-space case the formula is fairly easy to see from a geometric perspective. Let's suppose that we have $\mathbf{v} = (v_1, v_2)$ and we want to find the magnitude (or length) of this vector. Consider the following sketch of the vector.

Since we know that the components of v are also the coordinates of the terminal point of the vector when its initial point is the origin (as it is here), we know the lengths of the sides of a right triangle as shown. Then, using the Pythagorean Theorem, we can find the length of the hypotenuse, but that is also the length of the vector. A similar argument can be done on the 3-space version.
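None of this arithmetic is hard to code. Here's a quick NumPy check of the Example 2 computations and one of the norms from the next example; the variable names simply mirror the examples.

```python
import numpy as np

a, b, c = np.array([4, -6]), np.array([-3, -7]), np.array([-1, 5])
u, v, w = np.array([1, -2, 6]), np.array([0, 4, -1]), np.array([9, 2, -3])

print(-w)                  # [-9 -2  3]
print(a + b)               # [  1 -13]
print(a - c)               # [  5 -11]
print(a - 3 * b + 10 * c)  # [ 3 65]
print(4 * u + v - 2 * w)   # [-14  -8  29]
print(np.linalg.norm(np.array([-5, 3, 9])))  # sqrt(115), about 10.7238
```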
From above we know that $c\mathbf{v}$ is a scalar multiple of v and that its length is $|c|$ times the length of v, so we have,

$$\|c\mathbf{v}\| = |c|\,\|\mathbf{v}\|$$

We can also get this from the definition of the norm. Here is the 3-space case; the 2-space argument is identical.

$$\|c\mathbf{v}\| = \sqrt{(cv_1)^2 + (cv_2)^2 + (cv_3)^2} = \sqrt{c^2\left(v_1^2 + v_2^2 + v_3^2\right)} = |c|\sqrt{v_1^2 + v_2^2 + v_3^2} = |c|\,\|\mathbf{v}\|$$

There is one norm that we'll be particularly interested in on occasion. Suppose v is a vector in 2-space or 3-space. We call v a unit vector if $\|\mathbf{v}\| = 1$.

Let's compute a couple of norms.

Example 3 Compute the norms of the given vectors.
(a) $\mathbf{v} = (-5, 3, 9)$
(b) $\mathbf{j} = (0, 1, 0)$
(c) $\mathbf{w} = (3, -4)$ and $\frac{1}{5}\mathbf{w}$

Solution Not much to do with these other than to use the formula.

(a) $\|\mathbf{v}\| = \sqrt{(-5)^2 + 3^2 + 9^2} = \sqrt{115}$

(b) $\|\mathbf{j}\| = \sqrt{0^2 + 1^2 + 0^2} = \sqrt{1} = 1$, so j is a unit vector!

(c) Okay, with this one we've got two norms to compute. Here is the first one.

$$\|\mathbf{w}\| = \sqrt{3^2 + (-4)^2} = \sqrt{25} = 5$$

To get the second we'll first need,

$$\frac15\mathbf{w} = \left(\frac35, -\frac45\right)$$

and here is the norm using the fact that $\|c\mathbf{v}\| = |c|\|\mathbf{v}\|$.

$$\left\|\frac15\mathbf{w}\right\| = \left|\frac15\right|\,\|\mathbf{w}\| = \frac15(5) = 1$$

As a check, let's also compute this using the formula for the norm.

$$\left\|\frac15\mathbf{w}\right\| = \sqrt{\left(\frac35\right)^2 + \left(-\frac45\right)^2} = \sqrt{\frac{9}{25} + \frac{16}{25}} = \sqrt{\frac{25}{25}} = 1$$

Both methods get the same answer, as they should. Notice as well that w is not a unit vector but $\frac15\mathbf{w}$ is a unit vector.

We now need to take a look at a couple of facts about the norm of a vector.

Theorem 2 Given a vector v in 2-space or 3-space, $\|\mathbf{v}\| \ge 0$. Also, $\|\mathbf{v}\| = 0$ if and only if $\mathbf{v} = \mathbf{0}$.

Proof: The proof of the first part comes directly from the definition of the norm. The norm is defined to be a square root, and by convention the value of a square root is always greater than or equal to zero, so a norm will always be greater than or equal to zero.

Now, for the second part, recall that when we say "if and only if" in a theorem statement we're saying that this is kind of a two-way street. This statement is saying that if $\|\mathbf{v}\| = 0$ then we must also have $\mathbf{v} = \mathbf{0}$, and in reverse it's also saying that if $\mathbf{v} = \mathbf{0}$ then we must also have $\|\mathbf{v}\| = 0$. To prove this we need to make each assumption and then prove that it implies the other portion of the statement. We're only going to show the proof for the case where v is in 2-space. The proof in 3-space is identical.

So, assume that $\mathbf{v} = (v_1, v_2)$ and let's start the proof by assuming that $\|\mathbf{v}\| = 0$. Plugging into the formula for the norm gives,

$$0 = \sqrt{v_1^2 + v_2^2} \quad\Rightarrow\quad v_1^2 + v_2^2 = 0$$

As shown, the only way we'll get zero out of a square root is if the quantity under the radical is zero. Now at this point we've got a sum of squares equaling zero. The only way this will happen is if the individual terms are zero. So, this means that,

$$v_1 = 0 \;\;\&\;\; v_2 = 0 \quad\Rightarrow\quad \mathbf{v} = (0, 0) = \mathbf{0}$$

So, if $\|\mathbf{v}\| = 0$ we must have $\mathbf{v} = \mathbf{0}$.

Next, let's assume that $\mathbf{v} = \mathbf{0}$. In this case simply plug the components into the formula for the norm and a quick computation will show that $\|\mathbf{v}\| = 0$, and so we're done.

Theorem 3 Given a non-zero vector v in 2-space or 3-space, define a new vector $\mathbf{u} = \dfrac{1}{\|\mathbf{v}\|}\mathbf{v}$. Then u is a unit vector.

Proof: This is a really simple proof; just notice that u is a scalar multiple of v and take the norm of u.

$$\|\mathbf{u}\| = \left\|\frac{1}{\|\mathbf{v}\|}\mathbf{v}\right\|$$

Now we know that $\|\mathbf{v}\| > 0$ because norms are always greater than or equal to zero and will only be zero if we have the zero vector.
In this case we’ve explicitly assumed that we don’t have the zero vector and so we now the norm will be strictly positive and this will allow us to drop the absolute value bars on the norm when we do the computation. We can now do the following, 1 1 1 1 = = = = u v v v v v v So, u is a unit vector. This theorem tells us that we can always turn a non-zero vector into a unit vector simply be dividing by the norm. Note as well that because all we’re doing to compute this new unit vector is scalar multiplication by a positive number this new unit vector will point in the same direction as the original vector. Linear Algebra © 2007 Paul Dawkins 139 Example 4 Given ( ) 3, 1, 2 = −− v find a unit vector that, (a) points in the same direction as v (b) points in the opposite direction as v Solution (a) Now, as pointed out after the proof of the previous theorem, the unit vector computed in the theorem will point in the same direction as v so all we need to do is compute the norm of v and then use the theorem to find a unit vector that will point in the same direction as v. ( ) ( ) 2 2 2 3 1 2 14 = + − + − = v ( ) 1 3 1 2 3, 1, 2 , , 14 14 14 14 ⎛ ⎞ = −− = − − ⎜ ⎟ ⎝ ⎠ u (b) We’ve done most of the work for this one. Since u is a unit vector that points in the same direction as v then its negative will be a unit vector that points in the opposite directions as v. So, here is the negative of u. 3 1 2 , , 14 14 14 ⎛ ⎞ −= − ⎜ ⎟ ⎝ ⎠ u Finally, here is a sketch of all three of these vectors. Linear Algebra © 2007 Paul Dawkins 140 Dot Product & Cross Product In this section we’re going to be taking a look at two special products of vectors, the dot product and the cross product. However, before we look at either on of them we need to get a quick definition out of the way. Suppose that u and v are two vectors in 2-space or 3-space that are placed so that their initial points are the same. Then the angle between u and v is angle θ that is formed by u and v such that 0 θ π ≤ ≤ . Below are some examples of angles between vectors. Notice that there are always two angles that are formed by the two vectors and the one that we will always chose is the one that satisfies 0 θ π ≤ ≤ . We’ll be using this angle with both products. So, let’s get started by taking a look at the dot product. Of the two products we’ll be looking at in this section this is the one we’re going to run across most often in later sections. We’ll start with the definition. Definition 1 If u and v are two vectors in 2-space or 3-space and θ is the angle between them then the dot product, denoted by u v i is defined as, cosθ = u v u v i Note that the dot product is sometimes called the scalar product or the Euclidean inner product. Let’s see a quick example or two of the dot product. Example 1 Compute the dot product for the following pairs of vectors. (a) ( ) 0,0,3 = u and ( ) 2,0,2 = v which makes the angle between them 45°. (b) ( ) 0,2, 1 = − u and ( ) 1,1,2 = − v which makes the angle between them 90° . Solution For reference purposes here is a sketch of the two sets of vectors. Linear Algebra © 2007 Paul Dawkins 141 (a) There really isn’t too much to do here with this problem. ( )( ) ( ) 0 0 9 3 4 0 4 8 2 2 2 3 2 2 cos 45 6 2 6 2 = + + = = + + = = ⎛ ⎞ = = = ⎜ ⎟ ⎜ ⎟ ⎝ ⎠ u v u v i (b) Nor is there a lot of work to do here. ( )( ) ( ) ( ) 0 4 1 5 1 1 4 6 5 6 cos 90 30 0 0 = + + = = + + = = = = u v u v i Now, there should be a question in everyone’s mind at this point. Just how did we arrive at those angles above? 
They are the correct angles, but just how did we get them? That is the problem with this definition of the dot product. If you don't have the angle between two vectors you can't easily compute the dot product, and sometimes finding the correct angle is not the easiest thing to do. Fortunately, there is another formula that we can use to compute the dot product that relies only on the components of the vectors and not the angle between them.

Theorem 1 Suppose that $\mathbf{u} = (u_1, u_2, u_3)$ and $\mathbf{v} = (v_1, v_2, v_3)$ are two vectors in 3-space. Then,

$$\mathbf{u}\cdot\mathbf{v} = u_1v_1 + u_2v_2 + u_3v_3$$

Likewise, if $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$ are two vectors in 2-space then,

$$\mathbf{u}\cdot\mathbf{v} = u_1v_1 + u_2v_2$$

Proof: We'll just prove the 3-space version of this theorem. The 2-space version has a similar proof.

Let's start out with the following figure. These three vectors form a triangle and the lengths of the sides are $\|\mathbf{u}\|$, $\|\mathbf{v}\|$, and $\|\mathbf{v} - \mathbf{u}\|$. Now, from the Law of Cosines we know that,

$$\|\mathbf{v} - \mathbf{u}\|^2 = \|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - 2\|\mathbf{u}\|\,\|\mathbf{v}\|\cos\theta$$

Now, plug in the definition of the dot product and solve for $\mathbf{u}\cdot\mathbf{v}$.

$$\|\mathbf{v} - \mathbf{u}\|^2 = \|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - 2(\mathbf{u}\cdot\mathbf{v})$$

$$\mathbf{u}\cdot\mathbf{v} = \frac12\left(\|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - \|\mathbf{v} - \mathbf{u}\|^2\right) \tag{1}$$

Next, we know that $\mathbf{v} - \mathbf{u} = (v_1 - u_1, v_2 - u_2, v_3 - u_3)$ and so we can compute $\|\mathbf{v} - \mathbf{u}\|^2$. Note as well that because of the square on the norm we won't have a square root. We'll also do all of the multiplications.

$$\begin{aligned} \|\mathbf{v} - \mathbf{u}\|^2 &= (v_1 - u_1)^2 + (v_2 - u_2)^2 + (v_3 - u_3)^2 \\ &= v_1^2 - 2v_1u_1 + u_1^2 + v_2^2 - 2v_2u_2 + u_2^2 + v_3^2 - 2v_3u_3 + u_3^2 \\ &= v_1^2 + v_2^2 + v_3^2 + u_1^2 + u_2^2 + u_3^2 - 2\left(u_1v_1 + u_2v_2 + u_3v_3\right) \end{aligned}$$

The first three terms of this are nothing more than the formula for $\|\mathbf{v}\|^2$ and the next three terms are the formula for $\|\mathbf{u}\|^2$. So, let's plug this into (1).

$$\mathbf{u}\cdot\mathbf{v} = \frac12\left(\|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - \left(\|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - 2\left(u_1v_1 + u_2v_2 + u_3v_3\right)\right)\right) = u_1v_1 + u_2v_2 + u_3v_3$$

And we're done with the proof.

Before we work an example using this new (easier to use) formula, let's notice that if we rewrite the definition of the dot product as follows,

$$\cos\theta = \frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|}, \qquad 0 \le \theta \le \pi$$

we now have a very easy way to determine the angle between any two vectors. In fact this is how we got the angles between the vectors in the first example!

Example 2 Determine the angle between the following pairs of vectors.
(a) $\mathbf{a} = (9, -2)$, $\mathbf{b} = (4, 18)$
(b) $\mathbf{u} = (3, -1, 6)$, $\mathbf{v} = (4, 2, 0)$

Solution (a) Here are all the important quantities for this problem.

$$\|\mathbf{a}\| = \sqrt{85} \qquad \|\mathbf{b}\| = \sqrt{340} \qquad \mathbf{a}\cdot\mathbf{b} = (9)(4) + (-2)(18) = 0$$

The angle is then,

$$\cos\theta = \frac{0}{\sqrt{85}\sqrt{340}} = 0 \quad\Rightarrow\quad \theta = 90^\circ$$

(b) The important quantities for this part are,

$$\|\mathbf{u}\| = \sqrt{46} \qquad \|\mathbf{v}\| = \sqrt{20} \qquad \mathbf{u}\cdot\mathbf{v} = (3)(4) + (-1)(2) + (6)(0) = 10$$

The angle is then,

$$\cos\theta = \frac{10}{\sqrt{46}\sqrt{20}} = 0.3296902 \quad\Rightarrow\quad \theta = 70.75^\circ$$

Note that we did need to use a calculator to get this result.
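Here is the same computation in a few lines of NumPy, reproducing both parts of Example 2; the helper name is our own, and note the radians-to-degrees conversion at the end.

```python
import numpy as np

def angle_between(u, v):
    """Angle between two vectors, in degrees, via cos(theta) = u.v/(|u||v|)."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(cos_theta))

print(angle_between(np.array([9, -2]), np.array([4, 18])))       # 90.0
print(angle_between(np.array([3, -1, 6]), np.array([4, 2, 0])))  # about 70.75
```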
This means that the angle between them is 90° and so from the definition of the dot product we have,

$$\mathbf{u}\cdot\mathbf{v} = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos(90^\circ) = \|\mathbf{u}\|\,\|\mathbf{v}\|\,(0) = 0$$

and so we have $\mathbf{u}\cdot\mathbf{v} = 0$.

Next, suppose that $\mathbf{u}\cdot\mathbf{v} = 0$. Then from the definition of the dot product we have,

$$0 = \mathbf{u}\cdot\mathbf{v} = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos\theta \quad\Rightarrow\quad \cos\theta = 0 \quad\Rightarrow\quad \theta = 90^\circ$$

and so the two vectors are orthogonal. Note that we used the fact that the two vectors are non-zero, and hence have non-zero magnitudes, in determining that we must have $\cos\theta = 0$.

If we take the convention that the zero vector is orthogonal to any other vector, we can say that any two vectors u and v will be orthogonal provided $\mathbf{u}\cdot\mathbf{v} = 0$. Using this convention means we don't need to worry about whether or not we have zero vectors.

Here are some nice properties of the dot product.

Theorem 3 Suppose that u, v, and w are three vectors that are all in 2-space or all in 3-space and that c is a scalar. Then,
(a) $\mathbf{v}\cdot\mathbf{v} = \|\mathbf{v}\|^2$ (this implies that $\|\mathbf{v}\| = (\mathbf{v}\cdot\mathbf{v})^{1/2}$)
(b) $\mathbf{u}\cdot\mathbf{v} = \mathbf{v}\cdot\mathbf{u}$
(c) $\mathbf{u}\cdot(\mathbf{v} + \mathbf{w}) = \mathbf{u}\cdot\mathbf{v} + \mathbf{u}\cdot\mathbf{w}$
(d) $c(\mathbf{u}\cdot\mathbf{v}) = (c\mathbf{u})\cdot\mathbf{v} = \mathbf{u}\cdot(c\mathbf{v})$
(e) $\mathbf{v}\cdot\mathbf{v} > 0$ if $\mathbf{v} \neq \mathbf{0}$
(f) $\mathbf{v}\cdot\mathbf{v} = 0$ if and only if $\mathbf{v} = \mathbf{0}$

We'll prove the first couple and leave the rest to you, since they follow pretty much from either the definition of the dot product or the formula from Theorem 1. The proof of the last one is nearly identical to the proof of Theorem 2 in the previous section.

Proof:
(a) The angle between v and v is 0 since they are the same vector, and so by the definition of the dot product we've got,

$$\mathbf{v}\cdot\mathbf{v} = \|\mathbf{v}\|\,\|\mathbf{v}\|\cos(0) = \|\mathbf{v}\|^2$$

To get the second part just take the square root of both sides.

(b) This proof is going to seem tricky but it's really not that bad. Let's just look at the 3-space case. So, $\mathbf{u} = (u_1, u_2, u_3)$ and $\mathbf{v} = (v_1, v_2, v_3)$ and the dot product $\mathbf{u}\cdot\mathbf{v}$ is

$$\mathbf{u}\cdot\mathbf{v} = u_1v_1 + u_2v_2 + u_3v_3$$

We can also compute $\mathbf{v}\cdot\mathbf{u}$ as follows,

$$\mathbf{v}\cdot\mathbf{u} = v_1u_1 + v_2u_2 + v_3u_3$$

However, since $u_1v_1 = v_1u_1$, etc. (they are just real numbers after all) these are identical, and so we've got $\mathbf{u}\cdot\mathbf{v} = \mathbf{v}\cdot\mathbf{u}$.

Example 3 Given $\mathbf{u} = (5, -2)$, $\mathbf{v} = (0, 7)$ and $\mathbf{w} = (4, 10)$ compute the following.
(a) $\mathbf{u}\cdot\mathbf{u}$ and $\|\mathbf{u}\|^2$
(b) $\mathbf{u}\cdot\mathbf{w}$
(c) $(-2\mathbf{u})\cdot\mathbf{v}$ and $\mathbf{u}\cdot(-2\mathbf{v})$

Solution (a) Okay, in this one we'll be verifying part (a) of the previous theorem. Note as well that because the norm is squared we'll not need to have the square root in the computation. Here are the computations for this part.

$$\mathbf{u}\cdot\mathbf{u} = (5)(5) + (-2)(-2) = 25 + 4 = 29 \qquad \|\mathbf{u}\|^2 = \left(\sqrt{5^2 + (-2)^2}\right)^2 = 29$$

So, as the theorem suggested, we do have $\mathbf{u}\cdot\mathbf{u} = \|\mathbf{u}\|^2$.

(b) Here's the dot product for this part.

$$\mathbf{u}\cdot\mathbf{w} = (5)(4) + (-2)(10) = 0$$

So, it looks like u and w are orthogonal.

(c) In this part we'll be verifying part (d) of the previous theorem. Here are the computations for this part.

$$-2\mathbf{u} = (-10, 4) \qquad -2\mathbf{v} = (0, -14)$$

$$(-2\mathbf{u})\cdot\mathbf{v} = (-10)(0) + (4)(7) = 28 \qquad \mathbf{u}\cdot(-2\mathbf{v}) = (5)(0) + (-2)(-14) = 28$$

Again, we got the result that we should expect.

We now need to take a look at a very important application of the dot product. Let's suppose that u and a are two vectors in 2-space or 3-space, and let's suppose that they are positioned so that their initial points are the same. What we want to do is "decompose" the vector u into two components. One, which we'll denote $\mathbf{v}_1$ for now, will be parallel to the vector a, and the other, denoted $\mathbf{v}_2$ for now, will be orthogonal to a.
See the image below for some examples of this kind of decomposition.

From these figures we can see how to actually construct the two pieces of our decomposition. Starting at u we drop a line straight down until it intersects a (or the line defined by a, as in the second case). The parallel vector $\mathbf{v}_1$ is then the vector that starts at the initial point of u and ends where the perpendicular line intersects a. Finding $\mathbf{v}_2$ is actually really simple provided we first have $\mathbf{v}_1$. From the image we can see that we have,

$$\mathbf{v}_1 + \mathbf{v}_2 = \mathbf{u} \quad\Rightarrow\quad \mathbf{v}_2 = \mathbf{u} - \mathbf{v}_1$$

We now need to get some terminology and notation out of the way. The parallel vector, $\mathbf{v}_1$, is called the orthogonal projection of u on a and is denoted by $\mathrm{proj}_{\mathbf{a}}\mathbf{u}$. Note that sometimes $\mathrm{proj}_{\mathbf{a}}\mathbf{u}$ is called the vector component of u along a. The orthogonal vector, $\mathbf{v}_2$, is called the vector component of u orthogonal to a.

The following theorem gives us formulas for computing both of these vectors.

Theorem 4 Suppose that u and $\mathbf{a} \neq \mathbf{0}$ are both vectors in 2-space or 3-space. Then,

$$\mathrm{proj}_{\mathbf{a}}\mathbf{u} = \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}$$

and the vector component of u orthogonal to a is given by,

$$\mathbf{u} - \mathrm{proj}_{\mathbf{a}}\mathbf{u} = \mathbf{u} - \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}$$

Proof: First let $\mathbf{v}_1 = \mathrm{proj}_{\mathbf{a}}\mathbf{u}$. Then $\mathbf{u} - \mathrm{proj}_{\mathbf{a}}\mathbf{u}$ will be the vector component of u orthogonal to a, and so all we need to do is show the formula for $\mathbf{v}_1$ is what we claimed it to be.

To do this, let's first note that since $\mathbf{v}_1$ is parallel to a it must be a scalar multiple of a, since we know from the last section that parallel vectors are scalar multiples of each other. Therefore there is a scalar c such that $\mathbf{v}_1 = c\mathbf{a}$. Now, let's start with the following,

$$\mathbf{u} = \mathbf{v}_1 + \mathbf{v}_2 = c\mathbf{a} + \mathbf{v}_2$$

Next, take the dot product of both sides with a and distribute the dot product through the parenthesis.

$$\mathbf{u}\cdot\mathbf{a} = (c\mathbf{a} + \mathbf{v}_2)\cdot\mathbf{a} = c\,\mathbf{a}\cdot\mathbf{a} + \mathbf{v}_2\cdot\mathbf{a}$$

Now, $\mathbf{a}\cdot\mathbf{a} = \|\mathbf{a}\|^2$ and $\mathbf{v}_2\cdot\mathbf{a} = 0$ because $\mathbf{v}_2$ is orthogonal to a. Therefore this reduces to,

$$\mathbf{u}\cdot\mathbf{a} = c\,\|\mathbf{a}\|^2 \quad\Rightarrow\quad c = \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}$$

and so we get,

$$\mathbf{v}_1 = \mathrm{proj}_{\mathbf{a}}\mathbf{u} = \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}$$

We can also take a quick norm of $\mathrm{proj}_{\mathbf{a}}\mathbf{u}$ to get a nice formula for the magnitude of the orthogonal projection of u on a.

$$\left\|\mathrm{proj}_{\mathbf{a}}\mathbf{u}\right\| = \left\|\frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}\right\| = \frac{|\mathbf{u}\cdot\mathbf{a}|}{\|\mathbf{a}\|^2}\,\|\mathbf{a}\| = \frac{|\mathbf{u}\cdot\mathbf{a}|}{\|\mathbf{a}\|}$$

Let's work a quick example or two of orthogonal projections.

Example 4 Compute the orthogonal projection of u on a and the vector component of u orthogonal to a for each of the following.
(a) $\mathbf{u} = (-3, 1)$, $\mathbf{a} = (7, 2)$
(b) $\mathbf{u} = (4, 0, -1)$, $\mathbf{a} = (3, 1, -5)$

Solution There really isn't much to do here other than to plug into the formulas, so we'll leave it to you to verify the details.

(a) First,

$$\mathbf{u}\cdot\mathbf{a} = -19 \qquad \|\mathbf{a}\|^2 = 53$$

Now the orthogonal projection of u on a is,

$$\mathrm{proj}_{\mathbf{a}}\mathbf{u} = \frac{-19}{53}(7, 2) = \left(-\frac{133}{53}, -\frac{38}{53}\right)$$

and the vector component of u orthogonal to a is,

$$\mathbf{u} - \mathrm{proj}_{\mathbf{a}}\mathbf{u} = (-3, 1) - \left(-\frac{133}{53}, -\frac{38}{53}\right) = \left(-\frac{26}{53}, \frac{91}{53}\right)$$

(b) First,

$$\mathbf{u}\cdot\mathbf{a} = 17 \qquad \|\mathbf{a}\|^2 = 35$$

Now the orthogonal projection of u on a is,

$$\mathrm{proj}_{\mathbf{a}}\mathbf{u} = \frac{17}{35}(3, 1, -5) = \left(\frac{51}{35}, \frac{17}{35}, -\frac{17}{7}\right)$$

and the vector component of u orthogonal to a is,

$$\mathbf{u} - \mathrm{proj}_{\mathbf{a}}\mathbf{u} = (4, 0, -1) - \left(\frac{51}{35}, \frac{17}{35}, -\frac{17}{7}\right) = \left(\frac{89}{35}, -\frac{17}{35}, \frac{10}{7}\right)$$
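Here's a compact NumPy version of these two formulas, checked against part (b); the helper name is our own.

```python
import numpy as np

def proj(a, u):
    """Orthogonal projection of u on a: (u.a / |a|^2) a."""
    return (np.dot(u, a) / np.dot(a, a)) * a

u = np.array([4.0, 0.0, -1.0])
a = np.array([3.0, 1.0, -5.0])
p = proj(a, u)
print(p)                 # [51/35, 17/35, -17/7] as decimals
print(u - p)             # the component of u orthogonal to a
print(np.dot(u - p, a))  # essentially 0: the two pieces really are orthogonal
```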
We need to be very careful with the notation $\mathrm{proj}_{\mathbf{a}}\mathbf{u}$. In this notation we are looking for the orthogonal projection of u (the second vector listed) on a (the vector that is subscripted). Let's do a quick example illustrating this.

Example 5 Given $\mathbf{u} = (4, -5)$ and $\mathbf{a} = (1, -1)$ compute,
(a) $\mathrm{proj}_{\mathbf{a}}\mathbf{u}$
(b) $\mathrm{proj}_{\mathbf{u}}\mathbf{a}$

Solution (a) In this case we are looking for the component of u that is parallel to a, and so the orthogonal projection is given by,

$$\mathrm{proj}_{\mathbf{a}}\mathbf{u} = \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}$$

so let's get all the quantities that we need.

$$\mathbf{u}\cdot\mathbf{a} = (4)(1) + (-5)(-1) = 9 \qquad \|\mathbf{a}\|^2 = 1^2 + (-1)^2 = 2$$

The projection is then,

$$\mathrm{proj}_{\mathbf{a}}\mathbf{u} = \frac92(1, -1) = \left(\frac92, -\frac92\right)$$

(b) Here we are looking for the component of a that is parallel to u, and so the orthogonal projection is given by,

$$\mathrm{proj}_{\mathbf{u}}\mathbf{a} = \frac{\mathbf{a}\cdot\mathbf{u}}{\|\mathbf{u}\|^2}\,\mathbf{u}$$

so let's get the quantities that we need for this part.

$$\mathbf{a}\cdot\mathbf{u} = \mathbf{u}\cdot\mathbf{a} = 9 \qquad \|\mathbf{u}\|^2 = 4^2 + (-5)^2 = 41$$

The projection is then,

$$\mathrm{proj}_{\mathbf{u}}\mathbf{a} = \frac{9}{41}(4, -5) = \left(\frac{36}{41}, -\frac{45}{41}\right)$$

As this example has shown, we need to pay attention to the placement of the two vectors in the projection notation. Each part above was asking for something different and, as shown, we did in fact get different answers, so be careful.

It's now time to move into the second vector product that we're going to look at in this section. However, before we do that we need to introduce the idea of the standard unit vectors or standard basis vectors for 3-space. These vectors are defined as follows,

$$\mathbf{i} = (1, 0, 0) \qquad \mathbf{j} = (0, 1, 0) \qquad \mathbf{k} = (0, 0, 1)$$

Each of these has a magnitude of 1, and so they are unit vectors. Also note that each one lies along a coordinate axis of 3-space and points in the positive direction, as shown below.

Notice that any vector in 3-space, say $\mathbf{u} = (u_1, u_2, u_3)$, can be written in terms of these three vectors as follows,

$$\mathbf{u} = (u_1, u_2, u_3) = (u_1, 0, 0) + (0, u_2, 0) + (0, 0, u_3) = u_1(1,0,0) + u_2(0,1,0) + u_3(0,0,1) = u_1\mathbf{i} + u_2\mathbf{j} + u_3\mathbf{k}$$

So, for example, we can do the following,

$$(-10, 4, 3) = -10\mathbf{i} + 4\mathbf{j} + 3\mathbf{k} \qquad\qquad (-1, 0, 2) = -\mathbf{i} + 2\mathbf{k}$$

Also note that if we define $\mathbf{i} = (1, 0)$ and $\mathbf{j} = (0, 1)$, these two vectors are the standard basis vectors for 2-space, and any vector in 2-space, say $\mathbf{u} = (u_1, u_2)$, can be written as,

$$\mathbf{u} = (u_1, u_2) = u_1\mathbf{i} + u_2\mathbf{j}$$

We're not going to need the 2-space version of things here, but it was worth pointing out that there is a 2-space version since we'll need it down the road.

Okay, we are now ready to look at the cross product. The first thing that we need to point out here is that, unlike the dot product, the cross product is only valid in 3-space. There are three different ways of defining it depending on how you want to do it. The following definition gives all three.

Definition 2 If u and v are two vectors in 3-space then the cross product, denoted by $\mathbf{u}\times\mathbf{v}$, is defined in one of three ways.

(a) $\mathbf{u}\times\mathbf{v} = (u_2v_3 - u_3v_2,\; u_3v_1 - u_1v_3,\; u_1v_2 - u_2v_1)$ – Vector Notation.

(b) $\mathbf{u}\times\mathbf{v} = \left(\begin{vmatrix} u_2 & u_3 \\ v_2 & v_3 \end{vmatrix},\; -\begin{vmatrix} u_1 & u_3 \\ v_1 & v_3 \end{vmatrix},\; \begin{vmatrix} u_1 & u_2 \\ v_1 & v_2 \end{vmatrix}\right)$ – Using 2×2 determinants.

(c) $\mathbf{u}\times\mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix}$ – Using 3×3 determinants.

Note that all three of these definitions are equivalent, as you can check by computing the determinants in the second and third definitions and verifying that you get the same formula as in the first definition.

Notice that the cross product of two vectors is a new vector, unlike the dot product, which gives a scalar. Make sure to keep these two products straight.

Let's take a quick look at an example of a cross product.

Example 6 Compute $\mathbf{u}\times\mathbf{v}$ for $\mathbf{u} = (4, -9, 1)$ and $\mathbf{v} = (3, -2, 7)$.
Solution You can use any of the three definitions above to compute this cross product. We'll use the third one. If you don't remember how to compute determinants, you might want to go back and check out the first section of the Determinants chapter. In that section you'll find the formulas for computing determinants of both 2×2 and 3×3 matrices.

$$\mathbf{u}\times\mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 4 & -9 & 1 \\ 3 & -2 & 7 \end{vmatrix} = -63\mathbf{i} + 3\mathbf{j} - 8\mathbf{k} - 28\mathbf{j} + 2\mathbf{i} + 27\mathbf{k} = -61\mathbf{i} - 25\mathbf{j} + 19\mathbf{k}$$

When we're using this definition of the cross product we'll always get the answer in terms of the standard basis vectors. However, we can always go back to the form we're used to. Doing this gives,

$$\mathbf{u}\times\mathbf{v} = (-61, -25, 19)$$

Here is a theorem listing the main properties of the cross product.

Theorem 5 Suppose u, v, and w are vectors in 3-space and c is any scalar. Then,
(a) $\mathbf{u}\times\mathbf{v} = -(\mathbf{v}\times\mathbf{u})$
(b) $\mathbf{u}\times(\mathbf{v} + \mathbf{w}) = (\mathbf{u}\times\mathbf{v}) + (\mathbf{u}\times\mathbf{w})$
(c) $(\mathbf{u} + \mathbf{v})\times\mathbf{w} = (\mathbf{u}\times\mathbf{w}) + (\mathbf{v}\times\mathbf{w})$
(d) $c(\mathbf{u}\times\mathbf{v}) = (c\mathbf{u})\times\mathbf{v} = \mathbf{u}\times(c\mathbf{v})$
(e) $\mathbf{u}\times\mathbf{0} = \mathbf{0}\times\mathbf{u} = \mathbf{0}$
(f) $\mathbf{u}\times\mathbf{u} = \mathbf{0}$

The proofs of all these properties come directly from the definition of the cross product and so are left to you to verify.

There are also quite a few properties that relate the dot product and the cross product. Here is a theorem giving those properties.

Theorem 6 Suppose u, v, and w are vectors in 3-space. Then,
(a) $\mathbf{u}\cdot(\mathbf{u}\times\mathbf{v}) = 0$
(b) $\mathbf{v}\cdot(\mathbf{u}\times\mathbf{v}) = 0$
(c) $\|\mathbf{u}\times\mathbf{v}\|^2 = \|\mathbf{u}\|^2\|\mathbf{v}\|^2 - (\mathbf{u}\cdot\mathbf{v})^2$ – This is called Lagrange's Identity.
(d) $\mathbf{u}\times(\mathbf{v}\times\mathbf{w}) = (\mathbf{u}\cdot\mathbf{w})\mathbf{v} - (\mathbf{u}\cdot\mathbf{v})\mathbf{w}$
(e) $(\mathbf{u}\times\mathbf{v})\times\mathbf{w} = (\mathbf{u}\cdot\mathbf{w})\mathbf{v} - (\mathbf{v}\cdot\mathbf{w})\mathbf{u}$

The proofs of all these properties come directly from the definitions of the cross product and the dot product and so are left to you to verify.

The first two properties deserve some closer inspection. What they are saying is that, given two vectors u and v in 3-space, the cross product $\mathbf{u}\times\mathbf{v}$ is orthogonal to both u and v. The image below shows this idea.

As this figure shows, there are two directions in which the cross product can be orthogonal to u and v, and there is a nice way to determine which it will be. Take your right hand and cup your fingers so that they point in the direction of rotation that is shown in the figures (i.e. rotate u until it lies on top of v) and hold your thumb out. Your thumb will point in the direction of the cross product. This is often called the right-hand rule.

Notice that part (a) of Theorem 5 above also gives this same result. If we flip the order in which we take the cross product (which is really what we did in the figure above when we interchanged the letters) we get $\mathbf{u}\times\mathbf{v} = -(\mathbf{v}\times\mathbf{u})$. In other words, in one order we get a cross product that points in one direction, and if we flip the order we get a new cross product that points in the opposite direction as the first one.

Let's work a couple more cross products to verify some of the properties listed above, and so we can say we've got a couple more examples in the notes.
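NumPy has a built-in cross product, which makes properties like Theorem 6 (a), (b), and (c) easy to spot-check numerically; Example 6's vectors are reused here.

```python
import numpy as np

u = np.array([4, -9, 1])
v = np.array([3, -2, 7])
cp = np.cross(u, v)
print(cp)                            # [-61 -25  19]
print(np.dot(u, cp), np.dot(v, cp))  # 0 0: cp is orthogonal to both
# Lagrange's Identity: |u x v|^2 = |u|^2 |v|^2 - (u.v)^2
print(np.dot(cp, cp) == np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2)  # True
```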
Example 7 Given $\mathbf{u} = (3, -1, 4)$ and $\mathbf{v} = (2, 0, 1)$ compute each of the following.
(a) $\mathbf{u}\times\mathbf{v}$ and $\mathbf{v}\times\mathbf{u}$
(b) $\mathbf{u}\times\mathbf{u}$
(c) $\mathbf{u}\cdot(\mathbf{u}\times\mathbf{v})$ and $\mathbf{v}\cdot(\mathbf{u}\times\mathbf{v})$

Solution In the solutions to these problems we will be using the third definition above and we'll be setting up the determinant. We will not be showing the determinant computation, however; if you need a reminder on how to take determinants, go back to the first section in the Determinant chapter for a refresher.

(a) Let's compute $\mathbf{u}\times\mathbf{v}$ first.

$$\mathbf{u}\times\mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 3 & -1 & 4 \\ 2 & 0 & 1 \end{vmatrix} = -\mathbf{i} + 5\mathbf{j} + 2\mathbf{k} = (-1, 5, 2)$$

Remember that we'll get the answers here in terms of the standard basis vectors, and these can always be put back into the standard vector notation that we've been using to this point, as we did above.

Now let's compute $\mathbf{v}\times\mathbf{u}$.

$$\mathbf{v}\times\mathbf{u} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 2 & 0 & 1 \\ 3 & -1 & 4 \end{vmatrix} = \mathbf{i} - 5\mathbf{j} - 2\mathbf{k} = (1, -5, -2)$$

So, as part (a) of Theorem 5 suggested, we got $\mathbf{u}\times\mathbf{v} = -(\mathbf{v}\times\mathbf{u})$.

(b) Not much to do here other than do the cross product, and note that part (f) of Theorem 5 implies that we should get $\mathbf{u}\times\mathbf{u} = \mathbf{0}$.

$$\mathbf{u}\times\mathbf{u} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 3 & -1 & 4 \\ 3 & -1 & 4 \end{vmatrix} = (0, 0, 0)$$

So, sure enough, we got 0.

(c) We've already got $\mathbf{u}\times\mathbf{v}$ computed, so we just need to do a couple of dot products, and according to Theorem 6 both u and v are orthogonal to $\mathbf{u}\times\mathbf{v}$, so we should get zero out of both of these.

$$\mathbf{u}\cdot(\mathbf{u}\times\mathbf{v}) = (3)(-1) + (-1)(5) + (4)(2) = 0$$
$$\mathbf{v}\cdot(\mathbf{u}\times\mathbf{v}) = (2)(-1) + (0)(5) + (1)(2) = 0$$

And we did get zero as expected.

We'll give one theorem on cross products relating the magnitude of the cross product to the magnitudes of the two vectors we're taking the cross product of.

Theorem 7 Suppose that u and v are vectors in 3-space, and let θ be the angle between them. Then,

$$\|\mathbf{u}\times\mathbf{v}\| = \|\mathbf{u}\|\,\|\mathbf{v}\|\sin\theta$$

Let's take a look at one final example here.

Example 8 Given $\mathbf{u} = (1, -1, 0)$ and $\mathbf{v} = (0, -2, 0)$ verify the results of Theorem 7.

Solution Let's get the cross product and the norms taken care of first.

$$\mathbf{u}\times\mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & -1 & 0 \\ 0 & -2 & 0 \end{vmatrix} = (0, 0, -2) \qquad \|\mathbf{u}\times\mathbf{v}\| = \sqrt{0 + 0 + 4} = 2$$

$$\|\mathbf{u}\| = \sqrt{1 + 1 + 0} = \sqrt{2} \qquad \|\mathbf{v}\| = \sqrt{0 + 4 + 0} = 2$$

Now, in order to verify Theorem 7 we'll need the angle between the two vectors, and we can use the definition of the dot product above to find it. We'll first need the dot product.

$$\mathbf{u}\cdot\mathbf{v} = 2 \quad\Rightarrow\quad \cos\theta = \frac{2}{\left(\sqrt{2}\right)(2)} = \frac{\sqrt{2}}{2} \quad\Rightarrow\quad \theta = 45^\circ$$

All that's left is to check the formula.

$$\|\mathbf{u}\|\,\|\mathbf{v}\|\sin\theta = \left(\sqrt{2}\right)(2)\sin(45^\circ) = 2\sqrt{2}\left(\frac{\sqrt{2}}{2}\right) = 2 = \|\mathbf{u}\times\mathbf{v}\|$$

So, the theorem is verified.

Euclidean n-Space

In the first two sections of this chapter we looked at vectors in 2-space and 3-space. You probably noticed that, with the exception of the cross product (which is only defined in 3-space), all of the formulas that we had for vectors in 3-space were natural extensions of the 2-space formulas. In this section we're going to extend things out to a much more general setting. We won't be able to visualize things in a geometric setting as we did in the previous two sections, but things will extend out nicely. In fact, that was why we started in 2-space and 3-space. We wanted to start out in a setting where we could visualize some of what was going on before we generalized things into a setting where visualization is a very difficult thing to do.

So, let's get things started off with the following definition.

Definition 1 Given a positive integer n, an ordered n-tuple is a sequence of n real numbers denoted by $(a_1, a_2, \ldots, a_n)$. The complete set of all ordered n-tuples is called n-space and is denoted by $\mathbb{R}^n$.

In the previous sections we were looking at $\mathbb{R}^2$ (what we were calling 2-space) and $\mathbb{R}^3$ (what we were calling 3-space). Also, the more standard terms for 2-tuples and 3-tuples are ordered pair and ordered triplet, and those are the terms we'll be using from this point on.
Also, as we pointed out in the previous sections, an ordered pair, $(a_1, a_2)$, or an ordered triplet, $(a_1, a_2, a_3)$, can be thought of as either a point or a vector in $\mathbb{R}^2$ or $\mathbb{R}^3$ respectively. In general an ordered n-tuple, $(a_1, a_2, \ldots, a_n)$, can also be thought of as a "point" or a vector in $\mathbb{R}^n$. Again, we can't really visualize a point or a vector in $\mathbb{R}^n$, but we will think of them as points or vectors in $\mathbb{R}^n$ anyway and try not to worry too much about the fact that we can't really visualize them.

Next, we need to get the standard arithmetic definitions out of the way and all of these are going to be natural extensions of the arithmetic we saw in $\mathbb{R}^2$ and $\mathbb{R}^3$.

Definition 2 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are two vectors in $\mathbb{R}^n$.
(a) We say that u and v are equal if, $u_1 = v_1$, $u_2 = v_2$, $\ldots$, $u_n = v_n$
(b) The sum of u and v is defined to be, $\mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n)$
(c) The negative (or additive inverse) of u is defined to be, $-\mathbf{u} = (-u_1, -u_2, \ldots, -u_n)$
(d) The difference of two vectors is defined to be, $\mathbf{u} - \mathbf{v} = \mathbf{u} + (-\mathbf{v}) = (u_1 - v_1, u_2 - v_2, \ldots, u_n - v_n)$
(e) If c is any scalar then the scalar multiple of u is defined to be, $c\mathbf{u} = (cu_1, cu_2, \ldots, cu_n)$
(f) The zero vector in $\mathbb{R}^n$ is denoted by 0 and is defined to be, $\mathbf{0} = (0, 0, \ldots, 0)$

The basic properties of arithmetic are still valid in $\mathbb{R}^n$ so let's also give those so that we can say that we've done that.

Theorem 1 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$, $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ and $\mathbf{w} = (w_1, w_2, \ldots, w_n)$ are vectors in $\mathbb{R}^n$ and c and k are scalars then,
(a) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
(b) $\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}$
(c) $\mathbf{u} + \mathbf{0} = \mathbf{0} + \mathbf{u} = \mathbf{u}$
(d) $\mathbf{u} - \mathbf{u} = \mathbf{u} + (-\mathbf{u}) = \mathbf{0}$
(e) $1\mathbf{u} = \mathbf{u}$
(f) $(ck)\mathbf{u} = c(k\mathbf{u}) = k(c\mathbf{u})$
(g) $(c + k)\mathbf{u} = c\mathbf{u} + k\mathbf{u}$
(h) $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$

The proofs of all of these come directly from the definitions above and so won't be given here.

We now need to extend the dot product we saw in the previous section to $\mathbb{R}^n$ and we'll be giving it a new name as well.

Definition 3 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are two vectors in $\mathbb{R}^n$, then the Euclidean inner product, denoted by $\mathbf{u} \cdot \mathbf{v}$, is defined to be
$$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n$$

So, we can see that it's the same notation and is a natural extension of the dot product that we looked at in the previous section; we're just going to call it something different now. In fact, this is probably the more correct name for it and we should instead say that we renamed it to the dot product when we were working exclusively in $\mathbb{R}^2$ and $\mathbb{R}^3$.

Note that when we add in addition, scalar multiplication and the Euclidean inner product to $\mathbb{R}^n$ we will often call this Euclidean n-space.

We also have natural extensions of the properties of the dot product that we saw in the previous section.

Theorem 2 Suppose $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ are vectors in $\mathbb{R}^n$ and let c be a scalar then,
(a) $\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}$
(b) $(\mathbf{u} + \mathbf{v}) \cdot \mathbf{w} = \mathbf{u} \cdot \mathbf{w} + \mathbf{v} \cdot \mathbf{w}$
(c) $c(\mathbf{u} \cdot \mathbf{v}) = (c\mathbf{u}) \cdot \mathbf{v} = \mathbf{u} \cdot (c\mathbf{v})$
(d) $\mathbf{u} \cdot \mathbf{u} \ge 0$
(e) $\mathbf{u} \cdot \mathbf{u} = 0$ if and only if $\mathbf{u} = \mathbf{0}$.

The proofs of this theorem fall directly from the definition of the Euclidean inner product and are extensions of proofs given in the previous section, so they aren't given here.
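As a quick numerical illustration (a minimal sketch, assuming NumPy; the vectors are the ones that appear in Example 1 below), the componentwise arithmetic of Definition 2 and the inner product of Definition 3 look like this:

```python
import numpy as np

u = np.array([9, 3, -4, 0, 1])
v = np.array([0, -3, 2, -1, 7])

print(u + v)         # componentwise sum, Definition 2(b)
print(u - 4 * v)     # difference with a scalar multiple
print(np.dot(v, u))  # Euclidean inner product, Definition 3: -10

# The inner product really is just the componentwise products summed up.
assert np.dot(u, v) == np.sum(u * v)
```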
The final extension to the work of the previous sections that we need to do is to give the definition of the norm for vectors in $\mathbb{R}^n$, and we'll use this to define distance in $\mathbb{R}^n$.

Definition 4 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ is a vector in $\mathbb{R}^n$, then the Euclidean norm is,
$$\|\mathbf{u}\| = (\mathbf{u} \cdot \mathbf{u})^{\frac{1}{2}} = \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2}$$

Definition 5 Suppose $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are two points in $\mathbb{R}^n$, then the Euclidean distance between them is defined to be,
$$d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 + \cdots + (u_n - v_n)^2}$$

Notice in this definition that we called u and v points and then used them as vectors in the norm. This comes back to the idea that an n-tuple can be thought of as both a point and a vector, and so the two will often be used interchangeably where needed.

Let's take a quick look at a couple of examples.

Example 1 Given $\mathbf{u} = (9, 3, -4, 0, 1)$ and $\mathbf{v} = (0, -3, 2, -1, 7)$ compute
(a) $\mathbf{u} - 4\mathbf{v}$
(b) $\mathbf{v} \cdot \mathbf{u}$
(c) $\mathbf{u} \cdot \mathbf{u}$
(d) $\|\mathbf{u}\|$
(e) $d(\mathbf{u}, \mathbf{v})$

Solution
There really isn't much to do here other than use the appropriate definition.

(a)
$$\mathbf{u} - 4\mathbf{v} = (9, 3, -4, 0, 1) - 4(0, -3, 2, -1, 7) = (9, 3, -4, 0, 1) - (0, -12, 8, -4, 28) = (9, 15, -12, 4, -27)$$

(b)
$$\mathbf{v} \cdot \mathbf{u} = (0)(9) + (-3)(3) + (2)(-4) + (-1)(0) + (7)(1) = -10$$

(c)
$$\mathbf{u} \cdot \mathbf{u} = 9^2 + 3^2 + (-4)^2 + 0^2 + 1^2 = 107$$

(d)
$$\|\mathbf{u}\| = \sqrt{9^2 + 3^2 + (-4)^2 + 0^2 + 1^2} = \sqrt{107}$$

(e)
$$d(\mathbf{u}, \mathbf{v}) = \sqrt{(9 - 0)^2 + (3 - (-3))^2 + (-4 - 2)^2 + (0 - (-1))^2 + (1 - 7)^2} = \sqrt{190}$$

Just as we saw in the section on vectors, if we have $\|\mathbf{u}\| = 1$ then we will call u a unit vector, and so the vector u from the previous set of examples is not a unit vector.

Now that we've gotten both the inner product and the norm taken care of we can give the following theorem.

Theorem 3 Suppose u and v are two vectors in $\mathbb{R}^n$ and θ is the angle between them. Then,
$$\cos\theta = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \, \|\mathbf{v}\|}$$

Of course, since we are in $\mathbb{R}^n$ it is hard to visualize just what the angle between the two vectors is, but provided we can find it we can use this theorem. Also note that this was the definition of the dot product that we gave in the previous section and, like in that section, this theorem is most useful for actually determining the angle between two vectors.

The proof of this theorem is identical to the proof of Theorem 1 in the previous section and so isn't given here.

The next theorem is very important and has many uses in the study of vectors. In fact we'll need it in the proof of at least one theorem in these notes. The following theorem is called the Cauchy-Schwarz Inequality.

Theorem 4 Suppose u and v are two vectors in $\mathbb{R}^n$ then
$$|\mathbf{u} \cdot \mathbf{v}| \le \|\mathbf{u}\| \, \|\mathbf{v}\|$$

Proof: This proof is surprisingly simple. We'll start with the result of the previous theorem and take the absolute value of both sides.
$$|\mathbf{u} \cdot \mathbf{v}| = \|\mathbf{u}\| \, \|\mathbf{v}\| \, |\cos\theta|$$
However, we know that $|\cos\theta| \le 1$ and so we get our result by using this fact.
$$|\mathbf{u} \cdot \mathbf{v}| = \|\mathbf{u}\| \, \|\mathbf{v}\| \, |\cos\theta| \le \|\mathbf{u}\| \, \|\mathbf{v}\|$$

Here are some nice properties of the Euclidean norm.

Theorem 5 Suppose u and v are two vectors in $\mathbb{R}^n$ and that c is a scalar then,
(a) $\|\mathbf{u}\| \ge 0$
(b) $\|\mathbf{u}\| = 0$ if and only if $\mathbf{u} = \mathbf{0}$.
(c) $\|c\mathbf{u}\| = |c| \, \|\mathbf{u}\|$
(d) $\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$ - Usually called the Triangle Inequality

The proofs of the first two parts are a direct consequence of the definition of the Euclidean norm and so won't be given here.
Proof (c): We'll just run through the definition of the norm on this one.
$$\|c\mathbf{u}\| = \sqrt{(cu_1)^2 + (cu_2)^2 + \cdots + (cu_n)^2} = \sqrt{c^2\left(u_1^2 + u_2^2 + \cdots + u_n^2\right)} = |c|\sqrt{u_1^2 + u_2^2 + \cdots + u_n^2} = |c| \, \|\mathbf{u}\|$$

(d) The proof of this one isn't too bad once you see the steps you need to take. We'll start with the following.
$$\|\mathbf{u} + \mathbf{v}\|^2 = (\mathbf{u} + \mathbf{v}) \cdot (\mathbf{u} + \mathbf{v})$$
So, we're starting with the definition of the norm and squaring both sides to get rid of the square root on the right side. Next, we'll use the properties of the Euclidean inner product to simplify this.
$$\|\mathbf{u} + \mathbf{v}\|^2 = \mathbf{u} \cdot \mathbf{u} + \mathbf{u} \cdot \mathbf{v} + \mathbf{v} \cdot \mathbf{u} + \mathbf{v} \cdot \mathbf{v} = \mathbf{u} \cdot \mathbf{u} + 2(\mathbf{u} \cdot \mathbf{v}) + \mathbf{v} \cdot \mathbf{v}$$
Now, notice that we can convert the first and third terms into norms, so we'll do that. Also, $\mathbf{u} \cdot \mathbf{v}$ is a number and so we know that if we take the absolute value of it we'll have $\mathbf{u} \cdot \mathbf{v} \le |\mathbf{u} \cdot \mathbf{v}|$. Using this and converting the first and third terms to norms gives,
$$\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + 2(\mathbf{u} \cdot \mathbf{v}) + \|\mathbf{v}\|^2 \le \|\mathbf{u}\|^2 + 2|\mathbf{u} \cdot \mathbf{v}| + \|\mathbf{v}\|^2$$
We can now use the Cauchy-Schwarz inequality on the second term to get,
$$\|\mathbf{u} + \mathbf{v}\|^2 \le \|\mathbf{u}\|^2 + 2\|\mathbf{u}\|\|\mathbf{v}\| + \|\mathbf{v}\|^2$$
We're almost done. Let's notice that the right side can now be rewritten as,
$$\|\mathbf{u} + \mathbf{v}\|^2 \le \left(\|\mathbf{u}\| + \|\mathbf{v}\|\right)^2$$
Finally, take the square root of both sides.
$$\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$$

Example 2 Given $\mathbf{u} = (-2, 3, 1, -1)$ and $\mathbf{v} = (7, 1, -4, -2)$ verify the Cauchy-Schwarz inequality and the Triangle Inequality.

Solution
Let's first verify the Cauchy-Schwarz inequality. To do this we need the following quantities.
$$\mathbf{u} \cdot \mathbf{v} = -14 + 3 - 4 + 2 = -13$$
$$\|\mathbf{u}\| = \sqrt{4 + 9 + 1 + 1} = \sqrt{15} \qquad \|\mathbf{v}\| = \sqrt{49 + 1 + 16 + 4} = \sqrt{70}$$
Now, verify the Cauchy-Schwarz inequality.
$$|\mathbf{u} \cdot \mathbf{v}| = |-13| = 13 \le 32.4037 = \sqrt{15}\sqrt{70} = \|\mathbf{u}\| \, \|\mathbf{v}\|$$
Sure enough the Cauchy-Schwarz inequality holds true.

To verify the Triangle Inequality all we need is,
$$\mathbf{u} + \mathbf{v} = (5, 4, -3, -3) \qquad \|\mathbf{u} + \mathbf{v}\| = \sqrt{25 + 16 + 9 + 9} = \sqrt{59}$$
Now verify the Triangle Inequality.
$$\|\mathbf{u} + \mathbf{v}\| = \sqrt{59} = 7.6811 \le 12.2396 = \sqrt{15} + \sqrt{70} = \|\mathbf{u}\| + \|\mathbf{v}\|$$
So, the Triangle Inequality is also verified for this problem.

Here are some nice properties pertaining to the Euclidean distance.

Theorem 6 Suppose u, v, and w are vectors in $\mathbb{R}^n$ then,
(a) $d(\mathbf{u}, \mathbf{v}) \ge 0$
(b) $d(\mathbf{u}, \mathbf{v}) = 0$ if and only if $\mathbf{u} = \mathbf{v}$.
(c) $d(\mathbf{u}, \mathbf{v}) = d(\mathbf{v}, \mathbf{u})$
(d) $d(\mathbf{u}, \mathbf{v}) \le d(\mathbf{u}, \mathbf{w}) + d(\mathbf{w}, \mathbf{v})$ - Usually called the Triangle Inequality

The proofs of the first two parts are a direct consequence of the previous theorem, the proof of the third part is a direct consequence of the definition of distance, and these won't be proven here.

Proof (d): Let's start off with the definition of distance.
$$d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\|$$
Now, add in and subtract out w as follows,
$$d(\mathbf{u}, \mathbf{v}) = \|(\mathbf{u} - \mathbf{w}) + (\mathbf{w} - \mathbf{v})\|$$
Next use the Triangle Inequality for norms on this.
$$d(\mathbf{u}, \mathbf{v}) \le \|\mathbf{u} - \mathbf{w}\| + \|\mathbf{w} - \mathbf{v}\|$$
Finally, just reuse the definition of distance again.
$$d(\mathbf{u}, \mathbf{v}) \le d(\mathbf{u}, \mathbf{w}) + d(\mathbf{w}, \mathbf{v})$$

We have one final topic that needs to be generalized into Euclidean n-space.

Definition 6 Suppose u and v are two vectors in $\mathbb{R}^n$. We say that u and v are orthogonal if $\mathbf{u} \cdot \mathbf{v} = 0$.

So, this definition of orthogonality is identical to the definition that we saw when we were dealing with $\mathbb{R}^2$ and $\mathbb{R}^3$.
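The inequalities above and the orthogonality test are all one-liners to check numerically. Here is a minimal sketch, assuming NumPy, that reruns Example 2:

```python
import numpy as np

u = np.array([-2, 3, 1, -1])
v = np.array([7, 1, -4, -2])

# Cauchy-Schwarz: |u . v| <= ||u|| ||v||
lhs = abs(np.dot(u, v))                       # 13
rhs = np.linalg.norm(u) * np.linalg.norm(v)   # sqrt(15)*sqrt(70), about 32.4037
assert lhs <= rhs

# Triangle Inequality: ||u + v|| <= ||u|| + ||v||
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)

# Orthogonality test from Definition 6: these two are not orthogonal,
# since the inner product is nonzero.
print(np.dot(u, v))   # -13
```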
Here is the Pythagorean Theorem in $\mathbb{R}^n$.

Theorem 7 Suppose u and v are two orthogonal vectors in $\mathbb{R}^n$ then,
$$\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2$$

Proof: The proof of this theorem is fairly simple. From the proof of the Triangle Inequality for norms we have the following statement.
$$\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + 2(\mathbf{u} \cdot \mathbf{v}) + \|\mathbf{v}\|^2$$
However, because u and v are orthogonal we have $\mathbf{u} \cdot \mathbf{v} = 0$ and so we get,
$$\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2$$

Example 3 Show that $\mathbf{u} = (3, 0, 1, 0, 4, -1)$ and $\mathbf{v} = (-2, 5, 0, 2, -3, -18)$ are orthogonal and verify that the Pythagorean Theorem holds.

Solution
Showing that these two vectors are orthogonal is easy enough.
$$\mathbf{u} \cdot \mathbf{v} = (3)(-2) + (0)(5) + (1)(0) + (0)(2) + (4)(-3) + (-1)(-18) = 0$$
So, the Pythagorean Theorem should hold, but let's verify that. Here's the sum,
$$\mathbf{u} + \mathbf{v} = (1, 5, 1, 2, 1, -19)$$
and here are the squares of the norms.
$$\|\mathbf{u} + \mathbf{v}\|^2 = 1^2 + 5^2 + 1^2 + 2^2 + 1^2 + (-19)^2 = 393$$
$$\|\mathbf{u}\|^2 = 3^2 + 0^2 + 1^2 + 0^2 + 4^2 + (-1)^2 = 27$$
$$\|\mathbf{v}\|^2 = (-2)^2 + 5^2 + 0^2 + 2^2 + (-3)^2 + (-18)^2 = 366$$
A quick computation then confirms that $\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2$.

We've got one more theorem that gives a relationship between the Euclidean inner product and the norm. This may seem like a silly theorem, but we'll actually need it towards the end of the next chapter.

Theorem 8 If u and v are two vectors in $\mathbb{R}^n$ then,
$$\mathbf{u} \cdot \mathbf{v} = \frac{1}{4}\|\mathbf{u} + \mathbf{v}\|^2 - \frac{1}{4}\|\mathbf{u} - \mathbf{v}\|^2$$

Proof: The proof here is surprisingly simple. First, start with,
$$\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + 2(\mathbf{u} \cdot \mathbf{v}) + \|\mathbf{v}\|^2 \qquad \|\mathbf{u} - \mathbf{v}\|^2 = \|\mathbf{u}\|^2 - 2(\mathbf{u} \cdot \mathbf{v}) + \|\mathbf{v}\|^2$$
The first of these we've seen a couple of times already and the second is derived in the same manner that the first was, so you should verify that formula. Now subtract the second from the first to get,
$$4(\mathbf{u} \cdot \mathbf{v}) = \|\mathbf{u} + \mathbf{v}\|^2 - \|\mathbf{u} - \mathbf{v}\|^2$$
Finally, divide by 4 and we get the result we were after.
$$\mathbf{u} \cdot \mathbf{v} = \frac{1}{4}\|\mathbf{u} + \mathbf{v}\|^2 - \frac{1}{4}\|\mathbf{u} - \mathbf{v}\|^2$$

In the previous section we saw the three standard basis vectors for $\mathbb{R}^3$: i, j, and k. This idea can also be extended out to $\mathbb{R}^n$. In $\mathbb{R}^n$ we will define the standard basis vectors or standard unit vectors to be,
$$\mathbf{e}_1 = (1, 0, 0, \ldots, 0) \qquad \mathbf{e}_2 = (0, 1, 0, \ldots, 0) \qquad \cdots \qquad \mathbf{e}_n = (0, 0, 0, \ldots, 1)$$
and just as we saw in that section we can write any vector $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ in terms of these standard basis vectors as follows,
$$\mathbf{u} = u_1(1, 0, 0, \ldots, 0) + u_2(0, 1, 0, \ldots, 0) + \cdots + u_n(0, 0, 0, \ldots, 1) = u_1\mathbf{e}_1 + u_2\mathbf{e}_2 + \cdots + u_n\mathbf{e}_n$$
Note that in $\mathbb{R}^3$ we have $\mathbf{e}_1 = \mathbf{i}$, $\mathbf{e}_2 = \mathbf{j}$ and $\mathbf{e}_3 = \mathbf{k}$.

Now that we've gotten the general vector in Euclidean n-space taken care of we need to go back and remember some of the work that we did in the first chapter. It is often convenient to write the vector $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ as either a row matrix or a column matrix as follows,
$$\mathbf{u} = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix} \qquad\qquad \mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$$
In this notation we can use matrix addition and scalar multiplication for matrices to show that we'll get the same results as if we'd done vector addition and scalar multiplication on the original vectors.

So, why do we do this? Well, let's use the column matrix notation for the two vectors u and v and compute the following matrix product.
$$\mathbf{v}^T\mathbf{u} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} = u_1v_1 + u_2v_2 + \cdots + u_nv_n = \mathbf{u} \cdot \mathbf{v}$$
So, the Euclidean inner product can be thought of as a matrix multiplication using,
$$\mathbf{u} \cdot \mathbf{v} = \mathbf{v}^T\mathbf{u}$$
provided we consider u and v as column vectors.
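This identity is easy to see in code. Here is a minimal sketch, assuming NumPy, where the vectors are stored as actual column matrices so that the $\mathbf{v}^T\mathbf{u}$ product makes sense:

```python
import numpy as np

u = np.array([[9.0], [3.0], [-4.0], [0.0], [1.0]])   # column matrices
v = np.array([[0.0], [-3.0], [2.0], [-1.0], [7.0]])

# v^T u is a 1x1 matrix whose single entry is the inner product u . v
print(v.T @ u)                        # [[-10.]]
print(np.dot(u.ravel(), v.ravel()))   # -10.0, the same number as a plain dot product
```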
The natural question is just why this is important. Well, let's consider the following scenario. Suppose that u and v are two vectors in $\mathbb{R}^n$ and that A is an $n \times n$ matrix. Now consider the following inner product and write it as a matrix multiplication.
$$A\mathbf{u} \cdot \mathbf{v} = \mathbf{v}^T\left(A\mathbf{u}\right)$$
Now, rearrange the order of the multiplication and recall one of the properties of transposes.
$$A\mathbf{u} \cdot \mathbf{v} = \left(\mathbf{v}^T A\right)\mathbf{u} = \left(A^T\mathbf{v}\right)^T\mathbf{u}$$
Don't forget that we switch the order on the matrices when we move the transpose out of the parenthesis. Finally, this last matrix product can be rewritten as an inner product.
$$A\mathbf{u} \cdot \mathbf{v} = \mathbf{u} \cdot \left(A^T\mathbf{v}\right)$$
This tells us that if we've got an inner product and the first vector (or column matrix) is multiplied by a matrix, then we can move that matrix to the second vector (or column matrix) if we simply take its transpose. A similar argument can also show that,
$$\mathbf{u} \cdot \left(A\mathbf{v}\right) = \left(A^T\mathbf{u}\right) \cdot \mathbf{v}$$

Linear Transformations

In this section we're going to take a look at a special kind of function that arises very naturally in the study of Linear Algebra and has many applications in fields outside of mathematics such as physics and engineering. This section is devoted mostly to the basic definitions and facts associated with this special kind of function. We will be looking at a couple of examples, but we'll reserve most of the examples for the next section.

Now, the first thing that we need to do is take a step back and make sure that we're all familiar with some of the basics of functions in general. A function, f, is a rule (usually defined by an equation) that takes each element of the set A (called the domain) and associates it with exactly one element of a set B (called the codomain). The notation that we'll be using to denote our function is
$$f : A \to B$$
When we see this notation we know that we're going to be dealing with a function that takes elements from the set A and associates them with elements from the set B. Note as well that it is completely possible that not every element of the set B will be associated with an element from A. The subset of all elements from B that are associated with elements from A is called the range.

In this section we're going to be looking at functions of the form,
$$f : \mathbb{R}^n \to \mathbb{R}^m$$
In other words, we're going to be looking at functions that take elements/points/vectors from $\mathbb{R}^n$ and associate them with elements/points/vectors from $\mathbb{R}^m$. These kinds of functions are called transformations and we say that f maps $\mathbb{R}^n$ into $\mathbb{R}^m$. On an element basis we will also say that f maps the element u from $\mathbb{R}^n$ to the element v from $\mathbb{R}^m$.

So, just what do transformations look like? Consider the following scenario. Suppose that we have m functions of the following form,
$$w_1 = f_1(x_1, x_2, \ldots, x_n) \qquad w_2 = f_2(x_1, x_2, \ldots, x_n) \qquad \cdots \qquad w_m = f_m(x_1, x_2, \ldots, x_n)$$
Each of these functions takes a point in $\mathbb{R}^n$, namely $(x_1, x_2, \ldots, x_n)$, and maps it to the number $w_i$. We can now define a transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ as follows,
$$T(x_1, x_2, \ldots, x_n) = (w_1, w_2, \ldots, w_m)$$
In this way we associate with each point $(x_1, x_2, \ldots, x_n)$ from $\mathbb{R}^n$ a point $(w_1, w_2, \ldots, w_m)$ from $\mathbb{R}^m$ and we have a transformation.

Let's take a look at a couple of transformations.

Example 1 Given
$$w_1 = 3x_1 - 4x_2 \qquad w_2 = x_1 + 2x_2 \qquad w_3 = 6x_1 - x_2 \qquad w_4 = 10x_2$$
define $T : \mathbb{R}^2 \to \mathbb{R}^4$ as,
$$T(x_1, x_2) = (w_1, w_2, w_3, w_4) \qquad \text{OR} \qquad T(x_1, x_2) = \left(3x_1 - 4x_2,\ x_1 + 2x_2,\ 6x_1 - x_2,\ 10x_2\right)$$
Note that the second form is more convenient since we don't actually have to define any of the w's in that way, and it is how we will define most of our transformations.
We evaluate this just as we evaluate the functions that we're used to working with. Namely, pick a point from $\mathbb{R}^2$, plug it into the transformation, and we'll get a point out of the function that is in $\mathbb{R}^4$. For example,
$$T(-5, 2) = (-23, -1, -32, 20)$$

Example 2 Define $T : \mathbb{R}^3 \to \mathbb{R}^2$ as $T(x_1, x_2, x_3) = \left(4x_2^2 + x_3^2,\ x_1^2 - x_2x_3\right)$. A sample evaluation of this transformation is,
$$T(3, -1, 6) = (40, 15)$$

Now, in this section we're going to be looking at a special kind of transformation called a linear transformation. Here is the definition of a linear transformation.

Definition 1 A function $T : \mathbb{R}^n \to \mathbb{R}^m$ is called a linear transformation if for all u and v in $\mathbb{R}^n$ and all scalars c we have,
$$T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}) \qquad T(c\mathbf{u}) = cT(\mathbf{u})$$

We looked at two transformations above and only one of them is linear. Let's take a look at each one and see what we've got.

Example 3 Determine if the transformation from Example 2 is linear or not.

Solution
Okay, if this is going to be linear then it must satisfy both of the conditions from the definition. In other words, both of the following will need to be true.
$$T(\mathbf{u} + \mathbf{v}) = T\left(u_1 + v_1,\ u_2 + v_2,\ u_3 + v_3\right) = T(u_1, u_2, u_3) + T(v_1, v_2, v_3) = T(\mathbf{u}) + T(\mathbf{v})$$
$$T(c\mathbf{u}) = T(cu_1, cu_2, cu_3) = cT(u_1, u_2, u_3) = cT(\mathbf{u})$$
In this case let's take a look at the second condition.
$$T(c\mathbf{u}) = T(cu_1, cu_2, cu_3) = \left(4c^2u_2^2 + c^2u_3^2,\ c^2u_1^2 - c^2u_2u_3\right) = c^2\left(4u_2^2 + u_3^2,\ u_1^2 - u_2u_3\right) = c^2T(\mathbf{u}) \ne cT(\mathbf{u})$$
The second condition is not satisfied and so this is not a linear transformation. You might want to verify that in this case the first condition is also not satisfied. It's not too bad, but the work does get a little messy.

Example 4 Determine if the transformation in Example 1 is linear or not.

Solution
To do this one we're going to need to rewrite things just a little. The transformation is defined as $T(x_1, x_2) = (w_1, w_2, w_3, w_4)$ where,
$$w_1 = 3x_1 - 4x_2 \qquad w_2 = x_1 + 2x_2 \qquad w_3 = 6x_1 - x_2 \qquad w_4 = 10x_2$$
Now, each of the components is given by a system of linear (hhmm, makes one instantly wonder if the transformation is also linear…) equations and we saw in the first chapter that we can always write a system of linear equations in matrix form. Let's do that for this system.
$$\begin{bmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{bmatrix} = \begin{bmatrix} 3 & -4 \\ 1 & 2 \\ 6 & -1 \\ 0 & 10 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad \Rightarrow \qquad \mathbf{w} = A\mathbf{x}$$
Now, notice that if we plug in any column matrix x and do the matrix multiplication we'll get a new column matrix out, w. Let's pick a column matrix x totally at random and see what we get.
$$\begin{bmatrix} -23 \\ -1 \\ -32 \\ 20 \end{bmatrix} = \begin{bmatrix} 3 & -4 \\ 1 & 2 \\ 6 & -1 \\ 0 & 10 \end{bmatrix}\begin{bmatrix} -5 \\ 2 \end{bmatrix}$$
Of course, we didn't pick x completely at random. Notice that the x we chose was the column matrix representation of the point from $\mathbb{R}^2$ that we used in Example 1 to show a sample evaluation of the transformation. Just as importantly, notice that the result, w, is the matrix representation of the point from $\mathbb{R}^4$ that we got out of the evaluation.

In fact, this will always be the case for this transformation. So, in some way the evaluation $T(\mathbf{x})$ is the same as the matrix multiplication $A\mathbf{x}$ and so we can write the transformation as
$$T(\mathbf{x}) = A\mathbf{x}$$
Notice that we're kind of mixing and matching notation here. On the left x represents a point in $\mathbb{R}^2$ and on the right it is a $2 \times 1$ matrix. However, this really isn't a problem since they both can be used to represent a point in $\mathbb{R}^2$.
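If you want to check both of these claims numerically, here is a minimal sketch, assuming NumPy (the helper name T2 is just for illustration). It verifies that the matrix above reproduces the evaluation from Example 1, and that the transformation from Example 2 scales by $c^2$ rather than $c$:

```python
import numpy as np

# Induced matrix for the transformation in Examples 1/4.
A = np.array([[3, -4],
              [1, 2],
              [6, -1],
              [0, 10]])

x = np.array([-5, 2])
print(A @ x)    # [-23  -1 -32  20], matching T(-5, 2)

# The transformation from Example 2, written as a plain function.
def T2(x1, x2, x3):
    return np.array([4 * x2**2 + x3**2, x1**2 - x2 * x3])

c = 2.0
u = np.array([3.0, -1.0, 6.0])
print(T2(*(c * u)))   # [160.  60.] = c^2 * T2(u), so the scaling condition fails
print(c * T2(*u))     # [ 80.  30.] = c * T2(u), a different point
```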
We will have to get used to this notation however, as we'll be using it quite regularly.

Okay, just what were we after here? We wanted to determine if this transformation is linear or not. With this new way of writing the transformation this is actually really simple. We'll just make use of some very nice facts that we know about matrix multiplication. Here is the work for this problem.
$$T(\mathbf{u} + \mathbf{v}) = A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = T(\mathbf{u}) + T(\mathbf{v})$$
$$T(c\mathbf{u}) = A(c\mathbf{u}) = cA\mathbf{u} = cT(\mathbf{u})$$
So, both conditions of the definition are met and so this transformation is a linear transformation.

There are a couple of things to note here. First, we couldn't write the transformation from Example 2 as a matrix multiplication because at least one of the equations (okay, both in this case) for the components of the result was not linear. Second, when all the equations that give the components of the result are linear then the transformation will be linear. If at least one of the equations is not linear then the transformation will not be linear either.

Now, we need to investigate the idea that we used in the previous example in more detail. There are two issues that we want to take a look at.

First, we saw that, at least in some cases, matrix multiplication can be thought of as a linear transformation. As the following theorem shows, this is in fact always the case.

Theorem 1 If A is an $m \times n$ matrix then its induced transformation, $T_A : \mathbb{R}^n \to \mathbb{R}^m$, defined as,
$$T_A(\mathbf{x}) = A\mathbf{x}$$
is a linear transformation.

Proof: The proof here is really simple and in fact we pretty much saw it in the last example.
$$T_A(\mathbf{u} + \mathbf{v}) = A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = T_A(\mathbf{u}) + T_A(\mathbf{v})$$
$$T_A(c\mathbf{u}) = A(c\mathbf{u}) = cA\mathbf{u} = cT_A(\mathbf{u})$$
So, the induced function, $T_A$, satisfies both the conditions in the definition of a linear transformation and so it is a linear transformation.

So, any time we do matrix multiplication we can also think of the operation as evaluating a linear transformation.

The other thing that we saw in Example 4 is that we were able, in that case, to write a linear transformation as a matrix multiplication. Again, it turns out that every linear transformation can be written as a matrix multiplication.

Theorem 2 Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. Then there is an $m \times n$ matrix A such that $T = T_A$ (recall that $T_A$ is the transformation induced by A). The matrix A is called the matrix induced by T and is sometimes denoted as $A = [T]$.
Proof: First let,
$$\mathbf{e}_1 = (1, 0, 0, \ldots, 0) \qquad \mathbf{e}_2 = (0, 1, 0, \ldots, 0) \qquad \cdots \qquad \mathbf{e}_n = (0, 0, 0, \ldots, 1)$$
be the standard basis vectors for $\mathbb{R}^n$ and define A to be the $m \times n$ matrix whose ith column is $T(\mathbf{e}_i)$. In other words, A is given by,
$$A = \begin{bmatrix} T(\mathbf{e}_1) & T(\mathbf{e}_2) & \cdots & T(\mathbf{e}_n) \end{bmatrix}$$
Next let x be any vector from $\mathbb{R}^n$. We know that we can write x in terms of the standard basis vectors as follows,
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + \cdots + x_n\mathbf{e}_n$$
In order to prove this theorem we need to show that for any x (and we've got a nice general one above) we will have $T(\mathbf{x}) = T_A(\mathbf{x})$. So, let's start off and plug x into T using the general form as written out above.
$$T(\mathbf{x}) = T\left(x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + \cdots + x_n\mathbf{e}_n\right)$$
Now, we know that T is a linear transformation and so we can break this up at each of the "+"s as follows,
$$T(\mathbf{x}) = T(x_1\mathbf{e}_1) + T(x_2\mathbf{e}_2) + \cdots + T(x_n\mathbf{e}_n)$$
Next, each of the $x_i$'s is a scalar and again because T is a linear transformation we can write this as,
$$T(\mathbf{x}) = x_1T(\mathbf{e}_1) + x_2T(\mathbf{e}_2) + \cdots + x_nT(\mathbf{e}_n)$$
Next, let's notice that this is nothing more than the following matrix multiplication.
$$T(\mathbf{x}) = \begin{bmatrix} T(\mathbf{e}_1) & T(\mathbf{e}_2) & \cdots & T(\mathbf{e}_n) \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
But the first matrix is nothing more than A and the second is just x, and when we define A as we did above we will get,
$$T(\mathbf{x}) = A\mathbf{x} = T_A(\mathbf{x})$$
and so we've proven what we needed to.

In this proof we used the standard basis vectors to define the matrix A. As we will see in a later chapter there are other choices of vectors that we could use here, and these will produce a different induced matrix, A, and we do need to remember that. However, when we use the standard basis vectors to define A, as we're going to in this chapter, then we don't actually need to evaluate T at each of the basis vectors as we did in the proof. All we need to do is what we did in Example 4: write down the coefficient matrix for the system of equations that we get by writing out each of the components as individual equations.

Okay, we've done a lot of work in this section and we haven't really done any examples, so we should probably do a couple of them. Note that we are saving most of the examples for the next section, so don't expect a lot here. We're just going to do a couple so we can say we've done a couple.

Example 5 The zero transformation is the transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ that maps every vector x in $\mathbb{R}^n$ to the zero vector in $\mathbb{R}^m$, i.e. $T(\mathbf{x}) = \mathbf{0}$. The matrix induced by this transformation is the $m \times n$ zero matrix, 0, since,
$$T(\mathbf{x}) = T_0(\mathbf{x}) = 0\mathbf{x} = \mathbf{0}$$
To make it clear we're using the zero transformation we usually denote it by $T_0(\mathbf{x})$.

Example 6 The identity transformation is the transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ (yes, they are both $\mathbb{R}^n$) that maps every x to itself, i.e. $T(\mathbf{x}) = \mathbf{x}$. The matrix induced by this transformation is the $n \times n$ identity matrix, $I_n$, since,
$$T(\mathbf{x}) = T_I(\mathbf{x}) = I_n\mathbf{x} = \mathbf{x}$$
We'll usually denote the identity transformation as $T_I(\mathbf{x})$ to make it clear we're working with it.

So, the two examples above are standard examples and we did need them taken care of. However, they aren't really very illustrative for seeing how to construct the matrix induced by a transformation.
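The column-by-column construction from the proof also makes a nice recipe in code. Here is a minimal sketch, assuming NumPy and assuming the linear transformation is available as a Python callable (the helper name induced_matrix is just for illustration):

```python
import numpy as np

def induced_matrix(T, n):
    """Build the matrix [T] of a linear map T: R^n -> R^m, one column per
    standard basis vector, exactly as in the proof of Theorem 2."""
    columns = [T(np.eye(n)[i]) for i in range(n)]  # T(e_1), ..., T(e_n)
    return np.column_stack(columns)

# The (linear) transformation from Examples 1/4, written as a plain function.
T = lambda x: np.array([3*x[0] - 4*x[1], x[0] + 2*x[1], 6*x[0] - x[1], 10*x[1]])

A = induced_matrix(T, 2)
print(A)                             # recovers the 4 x 2 matrix from Example 4
x = np.array([-5, 2])
assert np.array_equal(A @ x, T(x))   # T(x) = Ax
```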
To see how this is done, let's take a look at some reflections in $\mathbb{R}^2$. We'll look at reflections in $\mathbb{R}^3$ in the next section.

Example 7 Determine the matrix induced by the following reflections.
(a) Reflection about the x-axis.
(b) Reflection about the y-axis.
(c) Reflection about the line $y = x$.

Solution
Note that all of these will be linear transformations of the form $T : \mathbb{R}^2 \to \mathbb{R}^2$.

(a) Reflection about the x-axis.
Let's start off with a sketch of what we're looking for here: the point $(x, y)$ gets sent to the point $(x, -y)$. So, from this sketch we can see that the components of the transformation (i.e. the equations that will map x into w) are,
$$w_1 = x \qquad w_2 = -y$$
Remember that $w_1$ will be the first component of the transformed point and $w_2$ will be the second component of the transformed point. Now, just as we did in Example 4 we can write down the matrix form of this system.
$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$$
So, it looks like the matrix induced by this reflection is,
$$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

(b) Reflection about the y-axis.
We'll do this one a little quicker. Here the point $(x, y)$ gets sent to $(-x, y)$, so the equations for this reflection are,
$$w_1 = -x \qquad w_2 = y$$
The matrix induced by this reflection is,
$$\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$

(c) Reflection about the line $y = x$.
Here the point $(x, y)$ gets sent to $(y, x)$, so the equations for this reflection are,
$$w_1 = y \qquad w_2 = x$$
The matrix induced by this reflection is,
$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

Hopefully, from these examples you're starting to get a feel for how we arrive at the induced matrix for a linear transformation. We'll be seeing more of these in the next section, but for now we need to move on to some more ideas about linear transformations.

Let's suppose that we have two linear transformations induced by the matrices A and B, $T_A : \mathbb{R}^n \to \mathbb{R}^k$ and $T_B : \mathbb{R}^k \to \mathbb{R}^m$. If we take any x out of $\mathbb{R}^n$, then $T_A$ will map x into $\mathbb{R}^k$. In other words, $T_A(\mathbf{x})$ will be in $\mathbb{R}^k$, and notice that we can then apply $T_B$ to this and its image will be in $\mathbb{R}^m$. In summary, if we take x out of $\mathbb{R}^n$ and first apply $T_A$ to x and then apply $T_B$ to the result, we will have a transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$.

This process is called composition of transformations and is denoted as
$$\left(T_B \circ T_A\right)(\mathbf{x}) = T_B\left(T_A(\mathbf{x})\right)$$
Note that the order here is important. The first transformation to be applied is on the right and the second is on the left.

Now, because both of our original transformations were linear we can do the following,
$$\left(T_B \circ T_A\right)(\mathbf{x}) = T_B\left(T_A(\mathbf{x})\right) = B\left(A\mathbf{x}\right) = \left(BA\right)\mathbf{x}$$
and so the composition $T_B \circ T_A$ is the same as multiplication by BA. This means that the composition will be a linear transformation provided the two original transformations were also linear.

Note as well that we can do composition with as many transformations as we want provided all the spaces correctly match up. For instance, with three transformations we require the following three transformations,
$$T_A : \mathbb{R}^n \to \mathbb{R}^k \qquad T_B : \mathbb{R}^k \to \mathbb{R}^p \qquad T_C : \mathbb{R}^p \to \mathbb{R}^m$$
and in this case the composition would be,
$$\left(T_C \circ T_B \circ T_A\right)(\mathbf{x}) = T_C\left(T_B\left(T_A(\mathbf{x})\right)\right) = \left(CBA\right)\mathbf{x}$$

Let's take a look at a couple of examples.

Example 8 Determine the matrix induced by the composition of reflection about the y-axis followed by reflection about the x-axis.

Solution
First, notice that reflection about the y-axis should change the sign on the x coordinate and following this with a reflection about the x-axis should change the sign on the y coordinate. The two transformations here are,
$$T_A : \mathbb{R}^2 \to \mathbb{R}^2 \quad \text{reflection about the } y\text{-axis} \qquad A = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$
$$T_B : \mathbb{R}^2 \to \mathbb{R}^2 \quad \text{reflection about the } x\text{-axis} \qquad B = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$
The matrix induced by the composition $T_B \circ T_A$ is then,
$$BA = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$$
Let's take a quick look at what this does to a point. Given x in $\mathbb{R}^2$ we have,
$$\left(T_B \circ T_A\right)(\mathbf{x}) = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -x \\ -y \end{bmatrix}$$
This is what we expected to get. This is often called reflection about the origin.
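Composition really is just a matrix product, and that is easy to confirm numerically. A minimal sketch, assuming NumPy, for the two reflections above:

```python
import numpy as np

A = np.array([[-1, 0], [0, 1]])   # reflection about the y-axis
B = np.array([[1, 0], [0, -1]])   # reflection about the x-axis

# The composition T_B o T_A is induced by the product BA (right to left).
print(B @ A)          # [[-1  0] [ 0 -1]], reflection about the origin

x = np.array([3, 5])
assert np.array_equal((B @ A) @ x, B @ (A @ x))   # same point either way
print((B @ A) @ x)    # [-3 -5]
```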
Example 9 Determine the matrix induced by the composition of reflection about the y-axis followed by another reflection about the y-axis.

Solution
In this case if we reflect about the y-axis twice we should end up right back where we started. The two transformations in this case are,
$$T_A : \mathbb{R}^2 \to \mathbb{R}^2 \quad \text{reflection about the } y\text{-axis} \qquad A = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$
$$T_B : \mathbb{R}^2 \to \mathbb{R}^2 \quad \text{reflection about the } y\text{-axis} \qquad B = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$
The induced matrix is,
$$BA = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2$$
So, the composition of these two transformations yields the identity transformation. So,
$$\left(T_B \circ T_A\right)(\mathbf{x}) = T_I(\mathbf{x}) = \mathbf{x}$$
and the composition will not change the original x, as we guessed.

Examples of Linear Transformations

This section is going to be mostly devoted to giving the induced matrices for a variety of standard linear transformations. We will be working exclusively with linear transformations of the form $T : \mathbb{R}^2 \to \mathbb{R}^2$ and $T : \mathbb{R}^3 \to \mathbb{R}^3$, and for the most part we'll be providing equations and sketches of the transformations in $\mathbb{R}^2$ but we'll just be providing equations for the $\mathbb{R}^3$ cases.

Let's start this section out with two of the transformations we looked at in the previous section just so we can say we've got all the main examples here in one section.

Zero Transformation
In this case every vector x is mapped to the zero vector and so the transformation is,
$$T(\mathbf{x}) = T_0(\mathbf{x})$$
and the induced matrix is the zero matrix, 0.

Identity Transformation
The identity transformation will map every vector x to itself. The transformation is,
$$T(\mathbf{x}) = T_I(\mathbf{x})$$
and so the induced matrix is the identity matrix.

Reflections
We saw a variety of reflections in $\mathbb{R}^2$ in the previous section, so we'll give those again here along with some reflections in $\mathbb{R}^3$ so we can say that we've got them all in one place.

Reflection about the x-axis in $\mathbb{R}^2$: $w_1 = x$, $w_2 = -y$; induced matrix $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$

Reflection about the y-axis in $\mathbb{R}^2$: $w_1 = -x$, $w_2 = y$; induced matrix $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$

Reflection about the line $y = x$ in $\mathbb{R}^2$: $w_1 = y$, $w_2 = x$; induced matrix $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$

Reflection about the origin in $\mathbb{R}^2$: $w_1 = -x$, $w_2 = -y$; induced matrix $\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$

Reflection about the xy-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = y$, $w_3 = -z$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$

Reflection about the yz-plane in $\mathbb{R}^3$: $w_1 = -x$, $w_2 = y$, $w_3 = z$; induced matrix $\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Reflection about the xz-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = -y$, $w_3 = z$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Note that in $\mathbb{R}^3$ when we say we're reflecting about a given plane, say the xy-plane, all we're doing is moving from above the plane to below the plane (or vice versa of course), and this means simply changing the sign of the remaining variable, z in the case of the xy-plane.

Orthogonal Projections
We first saw orthogonal projections in the section on the dot product. In that section we looked at projections only in $\mathbb{R}^2$, but as we'll see eventually they can be done in any setting. Here we are going to look at some special orthogonal projections.

Let's start with the orthogonal projections in $\mathbb{R}^2$. There are two of them that we want to look at: we project x onto the x-axis or onto the y-axis, depending upon which we're after.

Of course we also have a variety of projections in $\mathbb{R}^3$ as well. We could project onto one of the three axes or we could project onto one of the three coordinate planes. Here are the orthogonal projections we're going to look at in this section, their equations, and their induced matrices.
Projection on the x-axis in $\mathbb{R}^2$: $w_1 = x$, $w_2 = 0$; induced matrix $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$

Projection on the y-axis in $\mathbb{R}^2$: $w_1 = 0$, $w_2 = y$; induced matrix $\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$

Projection on the x-axis in $\mathbb{R}^3$: $w_1 = x$, $w_2 = 0$, $w_3 = 0$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

Projection on the y-axis in $\mathbb{R}^3$: $w_1 = 0$, $w_2 = y$, $w_3 = 0$; induced matrix $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

Projection on the z-axis in $\mathbb{R}^3$: $w_1 = 0$, $w_2 = 0$, $w_3 = z$; induced matrix $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Projection on the xy-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = y$, $w_3 = 0$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

Projection on the yz-plane in $\mathbb{R}^3$: $w_1 = 0$, $w_2 = y$, $w_3 = z$; induced matrix $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Projection on the xz-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = 0$, $w_3 = z$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Contractions & Dilations
These transformations are really just fancy names for scalar multiplication, $\mathbf{w} = c\mathbf{x}$, where c is a nonnegative scalar. The transformation is called a contraction if $0 \le c \le 1$ and a dilation if $c \ge 1$. The induced matrix is identical for both a contraction and a dilation, so we'll not give separate equations or induced matrices for each.

Contraction/Dilation in $\mathbb{R}^2$: $w_1 = cx$, $w_2 = cy$; induced matrix $\begin{bmatrix} c & 0 \\ 0 & c \end{bmatrix}$

Contraction/Dilation in $\mathbb{R}^3$: $w_1 = cx$, $w_2 = cy$, $w_3 = cz$; induced matrix $\begin{bmatrix} c & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & c \end{bmatrix}$

Rotations
We'll start this discussion in $\mathbb{R}^2$. We're going to start with a vector x and we want to rotate that vector through an angle θ in the counter-clockwise manner.

Unlike the previous transformations, where we could just write down the equations, we'll need to do a little derivation work here. First, from our basic knowledge of trigonometry we know that
$$x = r\cos\alpha \qquad y = r\sin\alpha$$
and we also know that
$$w_1 = r\cos(\alpha + \theta) \qquad w_2 = r\sin(\alpha + \theta)$$
Now, through a trig formula we can write the equations for w as follows,
$$w_1 = r\cos\alpha\cos\theta - r\sin\alpha\sin\theta \qquad w_2 = r\sin\alpha\cos\theta + r\cos\alpha\sin\theta$$
Notice that the formulas for x and y both show up in these formulas, so substituting in for those gives,
$$w_1 = x\cos\theta - y\sin\theta \qquad w_2 = x\sin\theta + y\cos\theta$$
Finally, since θ is a fixed angle, $\cos\theta$ and $\sin\theta$ are fixed constants, and so there are our equations and the induced matrix is,
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

In $\mathbb{R}^3$ we also have rotations, but the derivations are a little trickier. The three that we'll be giving here are counter-clockwise rotations about the three positive coordinate axes. Here are all the rotation equations and induced matrices.

Counter-clockwise rotation through an angle θ in $\mathbb{R}^2$: $w_1 = x\cos\theta - y\sin\theta$, $w_2 = x\sin\theta + y\cos\theta$; induced matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$

Counter-clockwise rotation through an angle θ about the positive x-axis in $\mathbb{R}^3$: $w_1 = x$, $w_2 = y\cos\theta - z\sin\theta$, $w_3 = y\sin\theta + z\cos\theta$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}$

Counter-clockwise rotation through an angle θ about the positive y-axis in $\mathbb{R}^3$: $w_1 = x\cos\theta + z\sin\theta$, $w_2 = y$, $w_3 = -x\sin\theta + z\cos\theta$; induced matrix $\begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}$

Counter-clockwise rotation through an angle θ about the positive z-axis in $\mathbb{R}^3$: $w_1 = x\cos\theta - y\sin\theta$, $w_2 = x\sin\theta + y\cos\theta$, $w_3 = z$; induced matrix $\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$
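Since the rotation matrices depend on θ, they are natural candidates for small builder functions. Here is a minimal sketch, assuming NumPy (the helper names rot2 and rot_z are just for illustration), which also checks the length-preserving property $R^T R = I$ that rotation matrices have:

```python
import numpy as np

def rot2(theta):
    """Counter-clockwise rotation through theta (radians) in R^2."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def rot_z(theta):
    """Counter-clockwise rotation through theta about the positive z-axis in R^3."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

R = rot2(np.pi / 6)                      # 30 degrees
assert np.allclose(R.T @ R, np.eye(2))   # rotations preserve lengths
print(R @ np.array([2, -6]))             # about [4.732, -4.196] = (sqrt(3)+3, 1-3*sqrt(3))
```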
Okay, we've given quite a few general formulas here, but we haven't worked any examples with numbers in them, so let's do that.

Example 1 Determine the new point after applying the transformation to the given point. Use the induced matrix associated with each transformation to find the new point.
(a) $\mathbf{x} = (2, -4, 1)$ reflected about the xz-plane.
(b) $\mathbf{x} = (10, 7, -9)$ projected on the x-axis.
(c) $\mathbf{x} = (10, 7, -9)$ projected on the yz-plane.

Solution
So, it would be easier to just do all of these directly rather than using the induced matrix, but at least this way we can verify that the induced matrix gives the correct value.

(a) Here's the multiplication for this one.
$$\mathbf{w} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ -4 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 1 \end{bmatrix}$$
So, the point $\mathbf{x} = (2, -4, 1)$ maps to $\mathbf{w} = (2, 4, 1)$ under this transformation.

(b) The multiplication for this problem is,
$$\mathbf{w} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 10 \\ 7 \\ -9 \end{bmatrix} = \begin{bmatrix} 10 \\ 0 \\ 0 \end{bmatrix}$$
The projection here is $\mathbf{w} = (10, 0, 0)$.

(c) The multiplication for the final transformation in this set is,
$$\mathbf{w} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 10 \\ 7 \\ -9 \end{bmatrix} = \begin{bmatrix} 0 \\ 7 \\ -9 \end{bmatrix}$$
The projection here is $\mathbf{w} = (0, 7, -9)$.

Let's take a look at a couple of rotations.

Example 2 Determine the new point after applying the transformation to the given point. Use the induced matrix associated with each transformation to find the new point.
(a) $\mathbf{x} = (2, -6)$ rotated 30° in the counter-clockwise direction.
(b) $\mathbf{x} = (0, 5, 1)$ rotated 90° in the counter-clockwise direction about the y-axis.
(c) $\mathbf{x} = (-3, 4, -2)$ rotated 25° in the counter-clockwise direction about the z-axis.

Solution
There isn't much to these other than plugging into the appropriate induced matrix and then doing the multiplication.

(a) Here is the work for this rotation.
$$\mathbf{w} = \begin{bmatrix} \cos 30^{\circ} & -\sin 30^{\circ} \\ \sin 30^{\circ} & \cos 30^{\circ} \end{bmatrix}\begin{bmatrix} 2 \\ -6 \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{3}}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{\sqrt{3}}{2} \end{bmatrix}\begin{bmatrix} 2 \\ -6 \end{bmatrix} = \begin{bmatrix} \sqrt{3} + 3 \\ 1 - 3\sqrt{3} \end{bmatrix}$$
The new point after this rotation is then $\mathbf{w} = \left(\sqrt{3} + 3,\ 1 - 3\sqrt{3}\right)$.

(b) The matrix multiplication for this rotation is,
$$\mathbf{w} = \begin{bmatrix} \cos 90^{\circ} & 0 & \sin 90^{\circ} \\ 0 & 1 & 0 \\ -\sin 90^{\circ} & 0 & \cos 90^{\circ} \end{bmatrix}\begin{bmatrix} 0 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 5 \\ 0 \end{bmatrix}$$
The point after this rotation becomes $\mathbf{w} = (1, 5, 0)$. Note that we could have predicted this one. The original point was in the yz-plane (because the x component is zero), and a 90° counter-clockwise rotation about the y-axis puts the new point in the xy-plane with the z component becoming the x component, and that is exactly what we got.

(c) Here's the work for this part. Notice that the angle is not one of the "standard" trig angles and so the answers will be in decimals.
$$\mathbf{w} = \begin{bmatrix} \cos 25^{\circ} & -\sin 25^{\circ} & 0 \\ \sin 25^{\circ} & \cos 25^{\circ} & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} -3 \\ 4 \\ -2 \end{bmatrix} = \begin{bmatrix} 0.9063 & -0.4226 & 0 \\ 0.4226 & 0.9063 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} -3 \\ 4 \\ -2 \end{bmatrix} = \begin{bmatrix} -4.4093 \\ 2.3574 \\ -2 \end{bmatrix}$$
The new point under this rotation is then $\mathbf{w} = (-4.4093, 2.3574, -2)$.
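Parts (b) and (c) are quick to double-check numerically. A minimal sketch, assuming NumPy (the helper name rot_y is just for illustration):

```python
import numpy as np

def rot_y(theta):
    """Counter-clockwise rotation through theta (radians) about the positive y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Part (b): 90 degrees about the y-axis moves (0, 5, 1) to (1, 5, 0).
w = rot_y(np.pi / 2) @ np.array([0, 5, 1])
print(np.round(w, 10))   # [1. 5. 0.]

# Part (c): 25 degrees about the z-axis applied to (-3, 4, -2).
c, s = np.cos(np.radians(25)), np.sin(np.radians(25))
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
print(Rz @ np.array([-3, 4, -2]))   # approximately [-4.4093  2.3574 -2.]
```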
Finally, let's take a look at some compositions of transformations.

Example 3 Determine the new point after applying the transformation to the given point. Use the induced matrix associated with each transformation to find the new point.
(a) Dilate $\mathbf{x} = (4, -1, -3)$ by 2 (i.e. $2\mathbf{x}$) and then project on the y-axis.
(b) Project $\mathbf{x} = (4, -1, -3)$ on the y-axis and then dilate by 2.
(c) Project $\mathbf{x} = (4, 2)$ on the x-axis and then rotate by 45° counter-clockwise.
(d) Rotate $\mathbf{x} = (4, 2)$ 45° counter-clockwise and then project on the x-axis.

Solution
Notice that the first two are the same transformations just done in the opposite order, and the same is true for the last two. Do you expect to get the same result from each composition regardless of the order the transformations are done in?

Recall as well that in compositions we can get the induced matrix by multiplying the induced matrices from each transformation from right to left in the order they are applied. For instance, the induced matrix for the composition $T_B \circ T_A$ is BA, where $T_A$ is the first transformation applied to the point.

(a) Dilate $\mathbf{x} = (4, -1, -3)$ by 2 and then project on the y-axis.
The induced matrix for this composition is,
$$\underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{project on } y\text{-axis}} \underbrace{\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}}_{\text{dilate by } 2} = \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{composition}}$$
The matrix multiplication for the new point is then,
$$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 4 \\ -1 \\ -3 \end{bmatrix} = \begin{bmatrix} 0 \\ -2 \\ 0 \end{bmatrix}$$
The new point is then $\mathbf{w} = (0, -2, 0)$.

(b) Project $\mathbf{x} = (4, -1, -3)$ on the y-axis and then dilate by 2.
In this case the induced matrix is,
$$\underbrace{\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}}_{\text{dilate by } 2} \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{project on } y\text{-axis}} = \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{composition}}$$
So, in this case the induced matrix for the composition is the same as in the previous part. Therefore, the new point is also the same, $\mathbf{w} = (0, -2, 0)$.

(c) Project $\mathbf{x} = (4, 2)$ on the x-axis and then rotate by 45° counter-clockwise.
Here is the induced matrix for this composition.
$$\underbrace{\begin{bmatrix} \cos 45^{\circ} & -\sin 45^{\circ} \\ \sin 45^{\circ} & \cos 45^{\circ} \end{bmatrix}}_{\text{rotate by } 45^{\circ}} \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}_{\text{project on } x\text{-axis}} = \begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{2}}{2} & 0 \\ \frac{\sqrt{2}}{2} & 0 \end{bmatrix}$$
The matrix multiplication for the new point after applying this composition is,
$$\mathbf{w} = \begin{bmatrix} \frac{\sqrt{2}}{2} & 0 \\ \frac{\sqrt{2}}{2} & 0 \end{bmatrix}\begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 2\sqrt{2} \\ 2\sqrt{2} \end{bmatrix}$$
The new point is then $\mathbf{w} = \left(2\sqrt{2}, 2\sqrt{2}\right)$.

(d) Rotate $\mathbf{x} = (4, 2)$ 45° counter-clockwise and then project on the x-axis.
The induced matrix for the final composition is,
$$\underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}_{\text{project on } x\text{-axis}} \underbrace{\begin{bmatrix} \cos 45^{\circ} & -\sin 45^{\circ} \\ \sin 45^{\circ} & \cos 45^{\circ} \end{bmatrix}}_{\text{rotate by } 45^{\circ}} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}$$
Note that this is different from the induced matrix in (c), so we should expect the new point to also be different. The fact that the induced matrix is different shouldn't be too surprising given that matrix multiplication is not a commutative operation.

The matrix multiplication for the new point is,
$$\mathbf{w} = \begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} \sqrt{2} \\ 0 \end{bmatrix}$$
The new point is then $\mathbf{w} = \left(\sqrt{2}, 0\right)$, and as we expected it is not the same as that from part (c).

So, as this example has shown us, transformation composition is not necessarily commutative, so we shouldn't expect the order not to matter in most cases.
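The order sensitivity in parts (c) and (d) is easy to see numerically. A minimal sketch, assuming NumPy:

```python
import numpy as np

P = np.array([[1, 0], [0, 0]])     # projection on the x-axis
c = s = np.sqrt(2) / 2
R = np.array([[c, -s], [s, c]])    # rotation by 45 degrees

x = np.array([4, 2])
print(R @ (P @ x))   # project first, then rotate: (2*sqrt(2), 2*sqrt(2))
print(P @ (R @ x))   # rotate first, then project: (sqrt(2), 0)

# The induced matrices differ because matrix multiplication is not commutative.
assert not np.allclose(R @ P, P @ R)
```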
Vector Spaces

Introduction

In the previous chapter we looked at vectors in Euclidean n-space, and in $\mathbb{R}^2$ and $\mathbb{R}^3$ we thought of vectors as directed line segments. A vector, however, is a much more general concept and it doesn't necessarily have to represent a directed line segment in $\mathbb{R}^2$ or $\mathbb{R}^3$. Nor does a vector have to represent the vectors we looked at in $\mathbb{R}^n$. As we'll soon see, a vector can be a matrix or a function, and that's only a couple of the possibilities for vectors.

With all that said, a good many of our examples will be examples from $\mathbb{R}^n$ since that is a setting that most people are familiar with and/or can visualize. We will however try to include the occasional example that does not lie in $\mathbb{R}^n$.

The main idea of study in this chapter is that of a vector space. A vector space is nothing more than a collection of vectors (whatever those now are…) that satisfies a set of axioms. Once we get the general definition of a vector and a vector space out of the way we'll look at many of the important ideas that come with vector spaces. Towards the end of the chapter we'll take a quick look at inner product spaces.

Here is a listing of the topics in this chapter.

Vector Spaces – In this section we'll formally define vectors and vector spaces.

Subspaces – Here we will be looking at vector spaces that live inside of other vector spaces.

Span – The concept of the span of a set of vectors will be investigated in this section.

Linear Independence – Here we will take a look at what it means for a set of vectors to be linearly independent or linearly dependent.

Basis and Dimension – We'll be looking at the idea of a set of basis vectors and the dimension of a vector space.

Change of Basis – In this section we will see how to change the set of basis vectors for a vector space.

Fundamental Subspaces – Here we will take a look at some of the fundamental subspaces of a matrix, including the row space, column space and null space.

Inner Product Spaces – We will be looking at a special kind of vector space in this section as well as define the inner product.

Orthonormal Basis – In this section we will develop and use the Gram-Schmidt process for constructing an orthogonal/orthonormal basis for an inner product space.

Least Squares – In this section we'll take a look at an application of some of the ideas that we will be discussing in this chapter.

QR-Decomposition – Here we will take a look at the QR-Decomposition for a matrix and how it can be used in the least squares process.

Orthogonal Matrices – We will take a look at a special kind of matrix, the orthogonal matrix, in this section.

Vector Spaces

As noted in the introduction to this chapter, vectors do not have to represent directed line segments in space. When we first start looking at many of the concepts of a vector space we usually start with the directed line segment idea and its natural extension to vectors in $\mathbb{R}^n$ because it is something that most people can visualize and get their hands on. So, the first thing that we need to do in this chapter is to define just what a vector space is and just what vectors really are.

However, before we actually do that we should point out that, because most people can visualize directed line segments, most of our examples in these notes will revolve around vectors in $\mathbb{R}^n$. We will try to always include an example or two with vectors that aren't in $\mathbb{R}^n$ just to make sure that we don't forget that vectors are more general objects, but the reality is that most of the examples will be in $\mathbb{R}^n$.

So, with all that out of the way let's go ahead and get the definition of a vector and a vector space out of the way.
Definition 1 Let V be a set on which addition and scalar multiplication are defined (this means that if u and v are objects in V and c is a scalar then we've defined $\mathbf{u} + \mathbf{v}$ and $c\mathbf{u}$ in some way). If the following axioms are true for all objects u, v, and w in V and all scalars c and k then V is called a vector space and the objects in V are called vectors.
(a) $\mathbf{u} + \mathbf{v}$ is in V – This is called closed under addition.
(b) $c\mathbf{u}$ is in V – This is called closed under scalar multiplication.
(c) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
(d) $\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}$
(e) There is a special object in V, denoted 0 and called the zero vector, such that for all u in V we have $\mathbf{u} + \mathbf{0} = \mathbf{0} + \mathbf{u} = \mathbf{u}$.
(f) For every u in V there is another object in V, denoted $-\mathbf{u}$ and called the negative of u, such that $\mathbf{u} - \mathbf{u} = \mathbf{u} + (-\mathbf{u}) = \mathbf{0}$.
(g) $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
(h) $(c + k)\mathbf{u} = c\mathbf{u} + k\mathbf{u}$
(i) $c(k\mathbf{u}) = (ck)\mathbf{u}$
(j) $1\mathbf{u} = \mathbf{u}$

We should make a couple of comments about these axioms at this point. First, do not get too locked into the "standard" ways of defining addition and scalar multiplication. For the most part we will be doing addition and scalar multiplication in a fairly standard way, but there will be the occasional example where we won't. In order for something to be a vector space it simply must have an addition and scalar multiplication that meet the above axioms, and it doesn't matter how strange the addition or scalar multiplication might be.

Next, the first two axioms may seem a little strange at first glance. It might seem like these two will be trivially true for any definition of addition or scalar multiplication; however, we will see at least one example in this section of a set that is not closed under a particular scalar multiplication.

Finally, with the exception of the first two, these axioms should all seem familiar to you. All of these axioms were in one of the theorems from the discussion on vectors and/or Euclidean n-space in the previous chapter. However, in this case they aren't properties, they are axioms. What that means is that they aren't to be proven. Axioms are simply the rules under which we're going to operate when we work with vector spaces. Given a definition of addition and scalar multiplication we'll simply need to verify that the above axioms are satisfied by our definitions.

We should also make a quick comment about the scalars that we'll be using here. To this point, and in all the examples we'll be looking at in the future, the scalars are real numbers. However, they don't have to be real numbers. They could be complex numbers. When we restrict the scalars to real numbers we generally call the vector space a real vector space, and when we allow the scalars to be complex numbers we generally call the vector space a complex vector space. We will be working exclusively with real vector spaces, and from this point on when we see vector space it is to be understood that we mean a real vector space.

We should now look at some examples of vector spaces and at least a couple of examples of sets that aren't vector spaces. Some of these will be fairly standard vector spaces while others may seem a little strange at first but are fairly important to other areas of mathematics.

Example 1 If n is any positive integer then the set $V = \mathbb{R}^n$ with the standard addition and scalar multiplication as defined in the Euclidean n-space section is a vector space. Technically we should show that the axioms are all met here; however, that was done in Theorem 1 from the Euclidean n-space section and so we won't do that for this example.
Note that from this point on when we refer to the standard vector addition and the standard vector scalar multiplication we are referring to those we defined in the Euclidean n-space section.

Example 2 The set $V = \mathbb{R}^2$ with the standard vector addition and scalar multiplication defined as,
$$c(u_1, u_2) = (u_1, cu_2)$$
is NOT a vector space.

Showing that something is not a vector space can be tricky because it's completely possible that only one of the axioms fails. In this case, because we're dealing with the standard addition, all the axioms involving the addition of objects from V (a, c, d, e, and f) will be valid. Also, in this case, of all the axioms involving the scalar multiplication (b, g, h, i, and j), only (h) is not valid. We'll show this in a bit, but the point needs to be made here that only one of the axioms will fail in this case, and that is enough for this set under this definition of addition and multiplication to not be a vector space.

First we should at least show that the set meets axiom (b), and this is easy enough to show: we can see that the result of the scalar multiplication is again a point in $\mathbb{R}^2$ and so the set is closed under scalar multiplication. Again, do not get used to this happening. We will see at least one example later in this section of a set that is not closed under scalar multiplication as we'll define it there.

Now, to show that (h) is not valid we'll need to compute both sides of the equality and show that they aren't equal.
$$(c + k)\mathbf{u} = (c + k)(u_1, u_2) = \left(u_1, (c + k)u_2\right) = (u_1, cu_2 + ku_2)$$
$$c\mathbf{u} + k\mathbf{u} = c(u_1, u_2) + k(u_1, u_2) = (u_1, cu_2) + (u_1, ku_2) = (2u_1, cu_2 + ku_2)$$
So, we can see that $(c + k)\mathbf{u} \ne c\mathbf{u} + k\mathbf{u}$ because the first components are not the same. This means that axiom (h) is not valid for this definition of scalar multiplication.

We'll not verify that the remaining scalar multiplication axioms are valid for this definition of scalar multiplication. We'll leave those to you. All you need to do is compute both sides of the equal sign and show that you get the same thing on each side.

Example 3 The set $V = \mathbb{R}^3$ with the standard vector addition and scalar multiplication defined as,
$$c(u_1, u_2, u_3) = (0, 0, cu_3)$$
is NOT a vector space.

Again, there is a single axiom that fails in this case. We'll leave it to you to verify that the others hold. In this case it is the last axiom, (j), that fails, as the following work shows.
$$1\mathbf{u} = 1(u_1, u_2, u_3) = \left(0, 0, (1)u_3\right) = (0, 0, u_3) \ne (u_1, u_2, u_3) = \mathbf{u}$$

Example 4 The set $V = \mathbb{R}^2$ with the standard scalar multiplication and addition defined as,
$$(u_1, u_2) + (v_1, v_2) = (u_1 + 2v_1, u_2 + v_2)$$
is NOT a vector space.

To see that this is not a vector space let's take a look at axiom (c).
$$\mathbf{u} + \mathbf{v} = (u_1, u_2) + (v_1, v_2) = (u_1 + 2v_1, u_2 + v_2)$$
$$\mathbf{v} + \mathbf{u} = (v_1, v_2) + (u_1, u_2) = (v_1 + 2u_1, v_2 + u_2)$$
So, because only the first component of the second point listed gets multiplied by 2, we can see that $\mathbf{u} + \mathbf{v} \ne \mathbf{v} + \mathbf{u}$, and so this is not a vector space. You should go through the other axioms and determine if they are valid or not for the practice.

So, we've now seen three examples of sets of the form $V = \mathbb{R}^n$ that are NOT vector spaces, so hopefully it is clear that there are sets out there that aren't vector spaces. In each case we had to change the definition of scalar multiplication or addition to make the set fail to be a vector space. However, don't read too much into that. It is possible for a set under the standard scalar multiplication and addition to fail to be a vector space, as we'll see in a bit. Likewise, it's possible for a set of this form to have a non-standard scalar multiplication and/or addition and still be a vector space.
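A single failing axiom is also easy to exhibit in code. Here is a minimal sketch, assuming NumPy (the helper names smul and smul3 are just for illustration), showing the failures from Examples 2 and 3 with concrete numbers:

```python
import numpy as np

# Example 2's nonstandard scalar multiplication: c(u1, u2) = (u1, c*u2).
def smul(c, u):
    return np.array([u[0], c * u[1]])

u = np.array([3.0, 5.0])
c, k = 2.0, 4.0
print(smul(c + k, u))            # [ 3. 30.]
print(smul(c, u) + smul(k, u))   # [ 6. 30.]  -- axiom (h) fails in the first slot

# Example 3's scalar multiplication: c(u1, u2, u3) = (0, 0, c*u3).
def smul3(c, u):
    return np.array([0.0, 0.0, c * u[2]])

w = np.array([3.0, 5.0, 7.0])
print(smul3(1, w))   # [0. 0. 7.] != w, so axiom (j) fails
```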
However, don’t read too much into that. It is possible for a set under the standard scalar multiplication and addition to fail to be a vector space as we’ll see in a bit. Likewise, it’s possible for a set of this form to have a non-standard scalar multiplication and/or addition and still be a vector space. In fact, let’s take a look at the following example. This is probably going to be the only example that we’re going to go through and do in excruciating detail in this section. We’re doing this for two reasons. First, you really should see all the detail that needs to go into actually showing that a set along with a definition of addition and scalar multiplication is a vector space. Second, our Linear Algebra © 2007 Paul Dawkins 186 definitions are NOT going to be standard here and it would be easy to get confused with the details if you had to go through them on your own. Example 5 Suppose that the set V is the set of positive real numbers (i.e. 0 x > ) with addition and scalar multiplication defined as follows, c x y xy cx x + = = This set under this addition and scalar multiplication is a vector space. First notice that we’re taking V to be only a portion of \ . If we took it to be all of \ we would not have a vector space. Next, do not get excited about the definitions of “addition” and “scalar multiplication” here. Even though they are not they are not addition and scalar multiplication as we think of them we are still going to call them the addition and scalar multiplication operations for this vector space. Okay, let’s go through each of the axioms and verify that they are valid. First let’s take a look at the closure axioms, (a) and (b). Since by x and y are positive numbers their product xy is a positive real number and so the V is closed under addition. Since x is positive then for any c c x is a positive real number and so V is closed under scalar multiplication. Next we’ll verify (c). We’ll do this one with some detail pointing out how we do each step. First assume that x and y are any two elements of V (i.e. they are two positive real numbers). We’ll now verify (d). Again, we’ll make it clear how we’re going about each step with this one. Assume that x, y, and z are any three elements of V. Next we need to find the zero vector, 0, and we need to be careful here. We use 0 to denote the zero vector but it does NOT have to be the number zero. In fact in this case it can’t be zero if for no other reason than the fact that the number zero isn’t in the set V ! We need to find an element that is in V so that under our definition of addition we have, x x x + = + = 0 0 It looks like we should define the “zero vector” in this case as : 0=1. In other words the zero vector for this set will be the number 1! Let’s see how that works and remember that our Linear Algebra © 2007 Paul Dawkins 187 “addition” here is really multiplication and remember to substitute the number 1 in for 0. If x is any element of V, 1 & 1 x x x x x x + = ⋅= + = ⋅ = 0 0 Sure enough that does what we want it to do. We next need to define the negative, x −, for each element x that is in V. As with the zero vector to not confuse x − with “minus x”, this is just the notation we use to denote the negative of x. In our case we need an element of V (so it can’t be minus x since that isn’t in V) such that ( ) x x + − = 0 and remember that 0=1 in our case! Given an x in V we know that x is strictly positive and so 1 x is defined (since x isn’t zero) and is positive (since x is positive) and therefore 1 x is in V. 
Also, under our definition of addition and the zero vector we have,
\[ x + (-x) = x \cdot \frac{1}{x} = 1 = \mathbf{0} \]
Therefore, for the set V the negative of x is $-x = \frac{1}{x}$.

So, at this point we've taken care of the closure and addition axioms, and we now just need to deal with the axioms relating to scalar multiplication.

We'll start with (g). We'll do this one in some detail so you can see what we're doing at each step. If x and y are any two elements of V and c is any scalar then,
\[ c(x + y) = c(xy) = (xy)^c = x^c y^c = x^c + y^c = cx + cy \]
So, it looks like we've verified (g).

Let's now verify (h). If x is any element of V and c and k are any two scalars then,
\[ (c + k)x = x^{c+k} = x^c x^k = x^c + x^k = cx + kx \]
So, this axiom is verified.

Now, let's verify (i). If x is any element of V and c and k are any two scalars then,
\[ c(kx) = c\!\left(x^k\right) = \left(x^k\right)^c = x^{ck} = (ck)x \]

We've got the final axiom to go here, and that's a fairly simple one to verify.
\[ 1x = x^1 = x \]
Just remember that 1x is the notation for scalar multiplication and NOT multiplication of x by the number 1.

Okay, that was a lot of work, and we're not going to be showing that much work in the remainder of the examples that are vector spaces. We'll leave that up to you to check most of the axioms now that you've seen one done completely out. For those examples that aren't a vector space we'll show the details on at least one of the axioms that fails. For these examples you should check the other axioms to see if they are valid or fail.

Example 6 Let the set V be the points on a line through the origin in $\mathbb{R}^2$ with the standard addition and scalar multiplication. Then V is a vector space.

First, let's think about just what V is. The set V is all the points that are on some line through the origin in $\mathbb{R}^2$. So, we know that the line must have the equation,
\[ ax + by = 0 \]
for some a and some b, at least one of which is not zero. Also note that a and b are fixed constants and aren't allowed to change. In other words we are always on the same line. Now, a point $(x_1, y_1)$ will be on the line, and hence in V, provided it satisfies the equation above.

We'll show that V is closed under addition and scalar multiplication and leave it to you to verify the remaining axioms.

Let's first show that V is closed under addition. To do this we'll need the sum of two random points from V, say $\mathbf{u} = (x_1, y_1)$ and $\mathbf{v} = (x_2, y_2)$, and we'll need to show that $\mathbf{u} + \mathbf{v} = (x_1 + x_2, y_1 + y_2)$ is also in V. This amounts to showing that this point satisfies the equation of the line, and that's fairly simple to do: just plug the coordinates into the equation and verify we get zero.
\[ a(x_1 + x_2) + b(y_1 + y_2) = (a x_1 + b y_1) + (a x_2 + b y_2) = 0 + 0 = 0 \]
So, after some rearranging and using the fact that both u and v were in V (and so satisfied the equation of the line), we see that the sum also satisfies the equation of the line and so is in V. We've now shown that V is closed under addition.

To show that V is closed under scalar multiplication we'll need to show that for any u from V and any scalar, c, the point $c\mathbf{u} = (c x_1, c y_1)$ is also in V. This is done pretty much as we did closure under addition.
\[ a(c x_1) + b(c y_1) = c(a x_1 + b y_1) = c(0) = 0 \]
So, $c\mathbf{u}$ is on the line and hence in V. V is therefore closed under scalar multiplication. A quick numerical version of these two closure checks is sketched below. Again, we'll leave it to you to verify the remaining axioms.
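Here is a small sanity check (our own illustration, not from the original notes) of the two closure computations in Example 6, done for one specific line through the origin.

```python
# Spot-checking closure for V = points on the line 3x - 2y = 0 (Example 6).

a, b = 3.0, -2.0          # fixed coefficients of the line ax + by = 0

def on_line(p):
    """True if the point p = (x, y) satisfies ax + by = 0 (up to rounding)."""
    x, y = p
    return abs(a * x + b * y) < 1e-12

u = (2.0, 3.0)            # 3*2 - 2*3 = 0, so u is in V
v = (4.0, 6.0)            # 3*4 - 2*6 = 0, so v is in V
c = -7.5

sum_uv = (u[0] + v[0], u[1] + v[1])
scaled = (c * u[0], c * u[1])

print(on_line(sum_uv))    # True: V is closed under addition
print(on_line(scaled))    # True: V is closed under scalar multiplication
```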
Note however, that because we're working with the standard addition, the zero vector and negative are the standard zero vector and negative that we're used to dealing with,
\[ \mathbf{0} = (0, 0) \qquad \qquad -\mathbf{u} = -(x_1, y_1) = (-x_1, -y_1) \]

Note that we can extend this example to a line through the origin in $\mathbb{R}^n$ and still have a vector space. Showing that this set is a vector space can be a little difficult if you don't know the equation of a line in $\mathbb{R}^n$, however, as many of you probably don't, and so we won't show the work here.

Example 7 Let the set V be the points on a line that does NOT go through the origin in $\mathbb{R}^2$ with the standard addition and scalar multiplication. Then V is not a vector space.

In this case the equation of the line will be,
\[ ax + by = c \]
for fixed constants a, b, and c, where at least one of a and b is non-zero and c is not zero. This set is not closed under addition or scalar multiplication. Here is the work showing that it's not closed under addition. Let $\mathbf{u} = (x_1, y_1)$ and $\mathbf{v} = (x_2, y_2)$ be any two points from V (and so they satisfy the equation above). Then,
\[ a(x_1 + x_2) + b(y_1 + y_2) = (a x_1 + b y_1) + (a x_2 + b y_2) = c + c = 2c \ne c \]
So the sum, $\mathbf{u} + \mathbf{v} = (x_1 + x_2, y_1 + y_2)$, does not satisfy the equation and hence is not in V, and so V is not closed under addition. We'll leave it to you to verify that this particular V is not closed under scalar multiplication either.

Also, note that since we are working with a set of points from $\mathbb{R}^2$ with the standard addition, the zero vector must be $\mathbf{0} = (0, 0)$, but because this point doesn't satisfy the equation it is not in V, and so axiom (e) is also not satisfied. In order for V to be a vector space it must contain the zero vector 0! You should go through the remaining axioms and see if there are any others that fail.

Before moving on we should note that prior to this example all the sets that were not vector spaces used a non-standard addition and/or scalar multiplication. In this example we've now seen that some sets under the standard addition and scalar multiplication will not be vector spaces either.

Example 8 Let the set V be the points on a plane through the origin in $\mathbb{R}^3$ with the standard addition and scalar multiplication. Then V is a vector space.

The equation of a plane through the origin in $\mathbb{R}^3$ is,
\[ ax + by + cz = 0 \]
where a, b, and c are fixed constants and at least one is not zero. Given the equation you can (hopefully) see that this will work in pretty much the same manner as Example 6, and so we won't show any work here.

Okay, we've seen quite a few examples to this point, but they've all involved sets that were some or all of $\mathbb{R}^n$, and so we now need to see a couple of examples of vector spaces whose elements (and hence the "vectors" of the set) are not points in $\mathbb{R}^n$.

Example 9 Let n and m be fixed numbers and let $M_{nm}$ represent the set of all $n \times m$ matrices. Also let addition and scalar multiplication on $M_{nm}$ be the standard matrix addition and standard matrix scalar multiplication. Then $M_{nm}$ is a vector space.

If we let c be any scalar and let the "vectors" u and v represent any two $n \times m$ matrices (i.e. they are both objects in $M_{nm}$), then we know from our work in the first chapter that the sum, $\mathbf{u} + \mathbf{v}$, and the scalar multiple, $c\mathbf{u}$, are also $n \times m$ matrices and hence are in $M_{nm}$. So $M_{nm}$ is closed under addition and scalar multiplication.
Next, if we define the zero vector, 0, to be the $n \times m$ zero matrix, and if the "vector" u is some $n \times m$ matrix A, we can define the negative, $-\mathbf{u}$, to be the matrix $-A$; then the properties of matrix arithmetic will show that the remainder of the axioms are valid. Therefore, $M_{nm}$ is a vector space.

Note that this example now gives us a whole host of new vector spaces. For instance, the set of $2 \times 2$ matrices, $M_{22}$, is a vector space, the set of all $5 \times 9$ matrices, $M_{59}$, is a vector space, etc. Also, the "vectors" in this vector space are really matrices!

Here's another important example that may appear to be even stranger yet.

Example 10 Let $F[a,b]$ be the set of all real valued functions that are defined on the interval $[a,b]$. Then given any two "vectors", $\mathbf{f} = f(x)$ and $\mathbf{g} = g(x)$, from $F[a,b]$ and any scalar c, define addition and scalar multiplication as,
\[ (\mathbf{f} + \mathbf{g})(x) = f(x) + g(x) \qquad \qquad (c\mathbf{f})(x) = c f(x) \]
Under these operations $F[a,b]$ is a vector space.

By assumption both f and g are real valued and defined on $[a,b]$. Then, for both addition and scalar multiplication we're just going to plug x into $f(x)$ and/or $g(x)$, and both of these are defined, so the sum or the product with a scalar will also be defined, and so this space is closed under addition and scalar multiplication.

The "zero vector", 0, for $F[a,b]$ is the zero function, i.e. the function that is zero for all x, and the negative of the "vector" f is the "vector" $-\mathbf{f} = -f(x)$.

We should make a couple of quick comments about this vector space before we move on. First, recall that $[a,b]$ represents the interval $a \le x \le b$ (i.e. we include the endpoints). We could also look at the set $F(a,b)$, which is the set of all real valued functions that are defined on $(a,b)$ ($a < x < b$, no endpoints), or $F(-\infty, \infty)$, the set of all real valued functions defined on $(-\infty, \infty)$, and we'll still have a vector space.

Also, depending upon the interval we choose to work with, we may have a different set of functions in the set. For instance, the function $\frac{1}{x}$ would be in $F[2,10]$ but not in $F[-3,6]$ because of division by zero.

In this case the "vectors" are now functions, so again we need to be careful with the term vector. It can mean a lot of different things depending upon what type of vector space we're working with.

Both of the vector spaces from Examples 9 and 10 are fairly important vector spaces, and we'll look at them again in the next section where we'll see some examples of some related vector spaces.

There is one final example that we need to look at in this section.

Example 11 Let V consist of a single object, denoted by 0, and define
\[ \mathbf{0} + \mathbf{0} = \mathbf{0} \qquad \qquad c\,\mathbf{0} = \mathbf{0} \]
Then V is a vector space and is called the zero vector space.

The last thing that we need to do in this section before moving on is to get a nice set of facts that fall pretty much directly from the axioms and will be true for all vector spaces.

Theorem 1 Suppose that V is a vector space, u is a vector in V, and c is any scalar. Then,
(a) $0\mathbf{u} = \mathbf{0}$
(b) $c\mathbf{0} = \mathbf{0}$
(c) $(-1)\mathbf{u} = -\mathbf{u}$
(d) If $c\mathbf{u} = \mathbf{0}$ then either $c = 0$ or $\mathbf{u} = \mathbf{0}$

The proofs of these are really quite simple, but they only appear that way after you've seen them. Coming up with them on your own can be difficult sometimes. We'll give the proof for two of them and you should try and prove the other two on your own.
Proof :
(a) Now, this can seem tricky, but each of these steps will come straight from a property of real numbers or one of the axioms above. We'll start with $0\mathbf{u}$, use the fact that we can always write $0 = 0 + 0$, and then use axiom (h).
\[ 0\mathbf{u} = (0 + 0)\mathbf{u} = 0\mathbf{u} + 0\mathbf{u} \]
This may have seemed like a silly and/or strange step, but it was required. We couldn't just add a $0\mathbf{u}$ onto one side because this would, in essence, be using the fact that $0\mathbf{u} = \mathbf{0}$, and that's what we're trying to prove!

So, while we don't know just what $0\mathbf{u}$ is as a vector, it is in the vector space and so we know from axiom (f) that it has a negative, which we'll denote by $-0\mathbf{u}$. Add the negative to both sides and then use axiom (f) again to say that $0\mathbf{u} + (-0\mathbf{u}) = \mathbf{0}$.
\[ 0\mathbf{u} + (-0\mathbf{u}) = 0\mathbf{u} + 0\mathbf{u} + (-0\mathbf{u}) \qquad \Rightarrow \qquad \mathbf{0} = 0\mathbf{u} + \mathbf{0} \]
Finally, use axiom (e) on the right side to get,
\[ \mathbf{0} = 0\mathbf{u} \]
and we've proven (a).

(c) In this case, if we can show that $\mathbf{u} + (-1)\mathbf{u} = \mathbf{0}$ then from axiom (f) we'll know that $(-1)\mathbf{u}$ is the negative of u, or in other words that $(-1)\mathbf{u} = -\mathbf{u}$. This isn't too hard to show. We'll start with $\mathbf{u} + (-1)\mathbf{u}$ and use axiom (j) to rewrite the first u as follows,
\[ \mathbf{u} + (-1)\mathbf{u} = 1\mathbf{u} + (-1)\mathbf{u} \]
Next, use axiom (h) on the right side and then a nice property of real numbers.
\[ \mathbf{u} + (-1)\mathbf{u} = (1 + (-1))\mathbf{u} = 0\mathbf{u} \]
Finally, use part (a) of this theorem on the right side and we get,
\[ \mathbf{u} + (-1)\mathbf{u} = \mathbf{0} \]

Subspaces

Let's go back to the previous section for a second and examine Example 1 and Example 6. In Example 1 we saw that $\mathbb{R}^n$ was a vector space with the standard addition and scalar multiplication for any positive integer n. So, in particular $\mathbb{R}^2$ is a vector space with the standard addition and scalar multiplication. In Example 6 we saw that the set of points on a line through the origin in $\mathbb{R}^2$ with the standard addition and scalar multiplication was also a vector space.

So, just what is so important about these two examples? Well, first notice that they both use the same addition and scalar multiplication. In and of itself that isn't important, but it will be important for the end result of what we want to discuss here. Next, the set of points in the vector space of Example 6 is also contained in the set of points in the vector space of Example 1. While it's not important to the discussion here, note that the opposite isn't true: given a line, we can find points in $\mathbb{R}^2$ that aren't on the line.

What we've seen here is that, at least for some vector spaces, it is possible to take certain subsets of the original vector space and, as long as we retain the definition of addition and scalar multiplication, we will get a new vector space. Of course, it is possible for some subsets to not be a new vector space. To see an example of this see Example 7 from the previous section. In that example we've got a subset of $\mathbb{R}^2$ with the standard addition and scalar multiplication and yet it's not a vector space.

We want to investigate this idea in more detail and we'll start off with the following definition.

Definition 1 Suppose that V is a vector space and W is a subset of V. If, under the addition and scalar multiplication that is defined on V, W is also a vector space, then we call W a subspace of V.

Now, technically, if we wanted to show that a subset W of a vector space V was a subspace we'd need to show that all 10 of the axioms from the definition of a vector space are valid; however, in reality that doesn't need to be done.
Many of the axioms (c, d, g, h, i, and j) deal with how addition and scalar multiplication work, but W is inheriting the definition of addition and scalar multiplication from V. Therefore, since elements of W are also elements of V, the six axioms listed above are guaranteed to be valid on W.

The only ones that we really need to worry about are the remaining four, all of which require something to be in the subset W. The first two (a and b) are the closure axioms that require that the sum of any two elements from W is back in W and that the scalar multiple of any element from W will be back in W. Note that the sum and scalar multiple will be in V; we just don't know if they will be in W. We also need to verify that the zero vector (axiom e) is in W and that each element of W has a negative that is also in W (axiom f).

As the following theorem shows, however, the only two axioms that we really need to worry about are the two closure axioms. Once we have those two axioms valid, we will get the zero vector and negative vector for free.

Theorem 1 Suppose that W is a non-empty (i.e. at least one element in it) subset of the vector space V. Then W will be a subspace if the following two conditions are true.
(a) If u and v are in W then $\mathbf{u} + \mathbf{v}$ is also in W (i.e. W is closed under addition).
(b) If u is in W and c is any scalar then $c\mathbf{u}$ is also in W (i.e. W is closed under scalar multiplication).
Here the definition of addition and scalar multiplication on W is the same as on V.

Proof : To prove this theorem all we need to do is show that if we assume the two closure axioms are valid the other 8 axioms will be given to us for free. As we discussed above, the axioms c, d, g, h, i, and j are true simply based on the fact that W is a subset of V and it uses the same addition and scalar multiplication, and so we get these for free.

We only need to verify that, assuming the two closure conditions, we get axioms e and f as well. From the second condition above we see that we are assuming that W is closed under scalar multiplication, and so both $0\mathbf{u}$ and $(-1)\mathbf{u}$ must be in W. But from Theorem 1 of the previous section we know that,
\[ 0\mathbf{u} = \mathbf{0} \qquad \qquad (-1)\mathbf{u} = -\mathbf{u} \]
This means that the zero vector and the negative of u must be in W, and so we're done.

Be careful with this proof. On the surface it may look like we never used the first condition, closure under addition, and indeed we didn't use it to show that axioms e and f were valid. However, in order for W to be a vector space it must be closed under addition, and so without that first condition we can't know whether or not W is in fact a vector space. Therefore, even though we didn't explicitly use it in the proof, it was required in order to guarantee that we have a vector space.

Next we should acknowledge the following fact.

Fact Every vector space, V, has at least two subspaces. Namely, V itself and $W = \{\mathbf{0}\}$ (the zero space).

Because V can be thought of as a subset of itself, we can also think of it as a subspace of itself. Also, the zero space, which is the vector space consisting only of the zero vector, $W = \{\mathbf{0}\}$, is a subset of V and is a vector space in its own right, and so will be a subspace of V.

At this point we should probably take a look at some examples. In all of these examples we assume that the standard addition and scalar multiplication are being used in each case unless otherwise stated. A small code sketch of the two-condition test from Theorem 1 is given below.
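Here is a minimal sketch (our own illustration, not from the original notes) of the two-condition subspace test from Theorem 1, applied to the set of points of the form $(0, x_2, x_3)$ that appears in part (b) of Example 1 below. Code can only spot-check closure on sample vectors, not prove it, so a True result here is evidence rather than a proof.

```python
# Spot-checking Theorem 1's two closure conditions for
# W = { (0, x2, x3) } inside R^3.

import random

def in_W(p):
    """Membership test for W: the first component must be zero."""
    return p[0] == 0

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    return tuple(c * a for a in u)

samples = [(0, random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(50)]

closed_add = all(in_W(add(u, v)) for u in samples for v in samples)
closed_mult = all(in_W(scale(random.uniform(-5, 5), u)) for u in samples)
print(closed_add and closed_mult)   # True: both conditions hold on the samples
```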
Example 1 Determine if the given set is a subspace of the given vector space.
(a) Let W be the set of all points, $(x, y)$, from $\mathbb{R}^2$ in which $x \ge 0$. Is this a subspace of $\mathbb{R}^2$?
(b) Let W be the set of all points from $\mathbb{R}^3$ of the form $(0, x_2, x_3)$. Is this a subspace of $\mathbb{R}^3$?
(c) Let W be the set of all points from $\mathbb{R}^3$ of the form $(1, x_2, x_3)$. Is this a subspace of $\mathbb{R}^3$?

Solution
In each of these cases we need to show either that the set is closed under addition and scalar multiplication or that it is not closed under at least one of those.

(a) Let W be the set of all points, $(x, y)$, from $\mathbb{R}^2$ in which $x \ge 0$. Is this a subspace of $\mathbb{R}^2$?

This set is closed under addition because,
\[ (x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2) \]
and since $x_1, x_2 \ge 0$ we also have $x_1 + x_2 \ge 0$, and so the resultant point is back in W.

However, this set is not closed under scalar multiplication. Let c be any negative scalar and further assume that $x > 0$. Then,
\[ c(x, y) = (cx, cy) \]
Because $x > 0$ and $c < 0$ we must have $cx < 0$, and so the resultant point is not in W because the first component is neither zero nor positive.

Therefore, W is not a subspace of V.

(b) Let W be the set of all points from $\mathbb{R}^3$ of the form $(0, x_2, x_3)$. Is this a subspace of $\mathbb{R}^3$?

This one is fairly simple to check: a point will be in W if the first component is zero. So, let $\mathbf{x} = (0, x_2, x_3)$ and $\mathbf{y} = (0, y_2, y_3)$ be any two points in W and let c be any scalar. Then,
\[ \mathbf{x} + \mathbf{y} = (0, x_2, x_3) + (0, y_2, y_3) = (0, x_2 + y_2, x_3 + y_3) \qquad \qquad c\mathbf{x} = (0, c x_2, c x_3) \]
So, both $\mathbf{x} + \mathbf{y}$ and $c\mathbf{x}$ are in W, so W is closed under addition and scalar multiplication and so W is a subspace.

(c) Let W be the set of all points from $\mathbb{R}^3$ of the form $(1, x_2, x_3)$. Is this a subspace of $\mathbb{R}^3$?

This one is here just to keep us from making any assumptions based on the previous part. This set is closed under neither addition nor scalar multiplication. In order for points to be in W in this case the first component must be a 1. However, if $\mathbf{x} = (1, x_2, x_3)$ and $\mathbf{y} = (1, y_2, y_3)$ are any two points in W and c is any scalar other than 1, we get,
\[ \mathbf{x} + \mathbf{y} = (1, x_2, x_3) + (1, y_2, y_3) = (2, x_2 + y_2, x_3 + y_3) \qquad \qquad c\mathbf{x} = (c, c x_2, c x_3) \]
Neither of these is in W, and so W is not a subspace.

Example 2 Determine if the given set is a subspace of the given vector space.
(a) Let W be the set of diagonal matrices of size $n \times n$. Is this a subspace of $M_{nn}$?
(b) Let W be the set of matrices of the form $\begin{bmatrix} 0 & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$. Is this a subspace of $M_{32}$?
(c) Let W be the set of matrices of the form $\begin{bmatrix} 2 & a_{12} \\ 0 & a_{22} \end{bmatrix}$. Is this a subspace of $M_{22}$?

Solution
(a) Let W be the set of diagonal matrices of size $n \times n$. Is this a subspace of $M_{nn}$?

Let u and v be any two $n \times n$ diagonal matrices and c be any scalar. Then,
\[ \mathbf{u} + \mathbf{v} = \begin{bmatrix} u_1 & 0 & \cdots & 0 \\ 0 & u_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_n \end{bmatrix} + \begin{bmatrix} v_1 & 0 & \cdots & 0 \\ 0 & v_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} u_1 + v_1 & 0 & \cdots & 0 \\ 0 & u_2 + v_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_n + v_n \end{bmatrix} \]
\[ c\mathbf{u} = \begin{bmatrix} c u_1 & 0 & \cdots & 0 \\ 0 & c u_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & c u_n \end{bmatrix} \]
Both $\mathbf{u} + \mathbf{v}$ and $c\mathbf{u}$ are also diagonal $n \times n$ matrices, so W is closed under addition and scalar multiplication and so is a subspace of $M_{nn}$.

(b) Let W be the set of matrices of the form $\begin{bmatrix} 0 & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$. Is this a subspace of $M_{32}$?
Let u and v be any two matrices from W and c be any scalar. Then,
\[ \mathbf{u} + \mathbf{v} = \begin{bmatrix} 0 & u_{12} \\ u_{21} & u_{22} \\ u_{31} & u_{32} \end{bmatrix} + \begin{bmatrix} 0 & v_{12} \\ v_{21} & v_{22} \\ v_{31} & v_{32} \end{bmatrix} = \begin{bmatrix} 0 & u_{12} + v_{12} \\ u_{21} + v_{21} & u_{22} + v_{22} \\ u_{31} + v_{31} & u_{32} + v_{32} \end{bmatrix} \qquad c\mathbf{u} = \begin{bmatrix} 0 & c u_{12} \\ c u_{21} & c u_{22} \\ c u_{31} & c u_{32} \end{bmatrix} \]
Both $\mathbf{u} + \mathbf{v}$ and $c\mathbf{u}$ are also in W, so W is closed under addition and scalar multiplication and hence is a subspace of $M_{32}$.

(c) Let W be the set of matrices of the form $\begin{bmatrix} 2 & a_{12} \\ 0 & a_{22} \end{bmatrix}$. Is this a subspace of $M_{22}$?

Let u and v be any two matrices from W. Then,
\[ \mathbf{u} + \mathbf{v} = \begin{bmatrix} 2 & u_{12} \\ 0 & u_{22} \end{bmatrix} + \begin{bmatrix} 2 & v_{12} \\ 0 & v_{22} \end{bmatrix} = \begin{bmatrix} 4 & u_{12} + v_{12} \\ 0 & u_{22} + v_{22} \end{bmatrix} \]
So, $\mathbf{u} + \mathbf{v}$ isn't in W since the entry in the first row and first column isn't a 2. Therefore, W is not closed under addition. You should also verify for yourself that W is not closed under scalar multiplication either. In either case W is not a subspace of $M_{22}$.

Do not read too much into the result from part (c) of this example. In general the set of upper triangular $n \times n$ matrices (without restrictions, unlike part (c) above) is a subspace of $M_{nn}$, and the set of lower triangular $n \times n$ matrices is also a subspace of $M_{nn}$. You should verify this for the practice.

Example 3 Determine if the given set is a subspace of the given vector space.
(a) Let $C[a,b]$ be the set of all continuous functions on the interval $[a,b]$. Is this a subspace of $F[a,b]$, the set of all real valued functions on the interval $[a,b]$?
(b) Let $P_n$ be the set of all polynomials of degree n or less. Is this a subspace of $F[a,b]$?
(c) Let W be the set of all polynomials of degree exactly n. Is this a subspace of $F[a,b]$?
(d) Let W be the set of all functions such that $f(6) = 10$. Is this a subspace of $F[a,b]$ where we have $a \le 6 \le b$?

Solution
(a) Let $C[a,b]$ be the set of all continuous functions on the interval $[a,b]$. Is this a subspace of $F[a,b]$, the set of all real valued functions on the interval $[a,b]$?

Okay, if you've not had Calculus you may not know what a continuous function is. A quick and dirty definition of continuity (not mathematically correct, but useful if you haven't had Calculus) is that a function is continuous on $[a,b]$ if there are no holes or breaks in the graph. Put another way: you can sketch the graph of the function from a to b without ever picking up your pencil or pen.

A fact from Calculus (which if you haven't had, please just believe this) is that the sum of two continuous functions is continuous, and multiplying a continuous function by a constant will give a new continuous function. So, what this fact tells us is that the set of continuous functions is closed under standard function addition and scalar multiplication, and that is what we're working with here.

So, $C[a,b]$ is a subspace of $F[a,b]$.

(b) Let $P_n$ be the set of all polynomials of degree n or less. Is this a subspace of $F[a,b]$?

First recall that a polynomial is said to have degree n if its largest exponent is n. Okay, let $\mathbf{u} = a_n x^n + \cdots + a_1 x + a_0$ and $\mathbf{v} = b_n x^n + \cdots + b_1 x + b_0$ and let c be any scalar. Then,
\[ \mathbf{u} + \mathbf{v} = (a_n + b_n) x^n + \cdots + (a_1 + b_1) x + (a_0 + b_0) \qquad \qquad c\mathbf{u} = c a_n x^n + \cdots + c a_1 x + c a_0 \]
In both cases the degree of the new polynomial is not greater than n.
Of course, in the case of scalar multiplication the result will remain degree n (provided $c \ne 0$), but with the sum it is possible that some of the coefficients cancel out to zero and hence reduce the degree of the polynomial. The point is that $P_n$ is closed under addition and scalar multiplication and so will be a subspace of $F[a,b]$.

(c) Let W be the set of all polynomials of degree exactly n. Is this a subspace of $F[a,b]$?

In this case W is not closed under addition. To see this, let's take a look at the $n = 2$ case to keep things simple (the same argument will work for other values of n) and consider the following two polynomials,
\[ \mathbf{u} = a x^2 + bx + c \qquad \qquad \mathbf{v} = -a x^2 + dx + e \]
where a is not zero; we know this is true because each polynomial must have degree 2. The other constants may or may not be zero. Both are polynomials of exactly degree 2 (since a is not zero), and if we add them we get,
\[ \mathbf{u} + \mathbf{v} = (b + d) x + c + e \]
So, the sum has degree at most 1 and so is not in W. Therefore, for $n = 2$, W is not closed under addition. We looked at $n = 2$ only to make it somewhat easier to write down the two example polynomials. We could just as easily have done the work for general n and we'd get the same result, and so W is not a subspace.

(d) Let W be the set of all functions such that $f(6) = 10$. Is this a subspace of $F[a,b]$ where we have $a \le 6 \le b$?

First notice that if we don't have $a \le 6 \le b$ then this problem makes no sense, so we will assume that $a \le 6 \le b$.

In this case suppose that we have two elements from W, $\mathbf{f} = f(x)$ and $\mathbf{g} = g(x)$. This means that $f(6) = 10$ and $g(6) = 10$. In order for W to be a subspace we'd need the sum and any scalar multiple to also be in W. In other words, if we evaluate the sum or the scalar multiple at 6 we'd need to get a result of 10. However, this won't happen. Let's take a look at the sum.
\[ (\mathbf{f} + \mathbf{g})(6) = f(6) + g(6) = 10 + 10 = 20 \ne 10 \]
and so the sum will not be in W. Likewise, if c is any scalar that isn't 1 we'll have,
\[ (c\mathbf{f})(6) = c f(6) = 10c \ne 10 \]
and so the scalar multiple is not in W either. Therefore W is not closed under addition or scalar multiplication and so is not a subspace.

Before we move on, let's make a couple of observations about some of the sets we looked at in this example.

First, we should just point out that the set of all continuous functions on the interval $[a,b]$, $C[a,b]$, is a fairly important vector space in its own right in many areas of mathematical study.

Next, we saw that the set of all polynomials of degree less than or equal to n, $P_n$, was a subspace of $F[a,b]$. However, if you've had Calculus you'll know that polynomials are continuous, and so $P_n$ can also be thought of as a subspace of $C[a,b]$ as well. In other words, subspaces can have subspaces themselves.

Finally, here is something for you to think about. In the last part we saw that the set of all functions for which $f(6) = 10$ was not a subspace of $F[a,b]$ with $a \le 6 \le b$. Let's take a more general look at this. For some fixed number k, let W be the set of all real valued functions for which $f(6) = k$. Are there any values of k for which W will be a subspace of $F[a,b]$ with $a \le 6 \le b$? Go back and think about how we did the work for that part, and that should show you that there is one value of k (and only one) for which W will be a subspace. Can you figure out what that number has to be?
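If you'd like to experiment with that closing question numerically, here is a small sketch (our own illustration, not from the original notes) that evaluates the sum of two functions from $W = \{f : f(6) = k\}$ at $x = 6$ for a value of k you choose.

```python
# Experiment: for which k is W = { f : f(6) = k } closed under addition?
# Pick a k, build two functions in W, and test whether their sum is in W.

k = 10.0   # try other values of k here

# Two functions with f(6) = k and g(6) = k (many other choices would do).
f = lambda x: k + (x - 6)
g = lambda x: k + (x - 6) ** 2

s = lambda x: f(x) + g(x)    # the "vector sum" in F[a, b]

print(s(6))        # evaluates to 2k, so the sum is in W only when 2k = k
print(s(6) == k)   # False for k = 10; experiment to find the k that works
```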
We now need to look at a fairly important subspace of $\mathbb{R}^m$ that we'll be seeing in future sections.

Definition 2 Suppose A is an $n \times m$ matrix. The null space of A is the set of all x in $\mathbb{R}^m$ such that $A\mathbf{x} = \mathbf{0}$.

Let's see some examples of null spaces that are easy to find.

Example 4 Determine the null space of each of the following matrices.
(a) $A = \begin{bmatrix} 2 & 0 \\ -4 & 10 \end{bmatrix}$
(b) $B = \begin{bmatrix} 1 & -7 \\ -3 & 21 \end{bmatrix}$
(c) $\mathbf{0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$

Solution
(a) To find the null space of A we'll need to solve the following system of equations.
\[ \begin{bmatrix} 2 & 0 \\ -4 & 10 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad \Rightarrow \qquad \begin{aligned} 2 x_1 &= 0 \\ -4 x_1 + 10 x_2 &= 0 \end{aligned} \]
We've given this in both matrix form and equation form. In equation form it is easy to see that the only solution is $x_1 = x_2 = 0$. In terms of vectors from $\mathbb{R}^2$ the solution consists of the single vector $\mathbf{0}$, and hence the null space of A is $\{\mathbf{0}\}$.

(b) Here is the system that we need to solve for this part.
\[ \begin{bmatrix} 1 & -7 \\ -3 & 21 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad \Rightarrow \qquad \begin{aligned} x_1 - 7 x_2 &= 0 \\ -3 x_1 + 21 x_2 &= 0 \end{aligned} \]
Now, we can see that these two equations are in fact the same equation, so we know there will be infinitely many solutions and that they will have the form,
\[ x_1 = 7t \qquad x_2 = t \qquad \qquad t \text{ is any real number} \]
If you need a refresher on solutions to systems, take a look at the first section of the first chapter.

So, the null space of B consists of all the solutions to $B\mathbf{x} = \mathbf{0}$. Therefore, the null space of B will consist of all the vectors $\mathbf{x} = (x_1, x_2)$ from $\mathbb{R}^2$ that are of the form,
\[ \mathbf{x} = (7t, t) = t(7, 1) \qquad \qquad t \text{ is any real number} \]
We'll see a better way to write this answer in the next section.

In terms of equations, rather than vectors in $\mathbb{R}^2$, note that the null space of B will be all of the points that are on the line through the origin given by $x_1 - 7 x_2 = 0$.

(c) In this case we're going to be looking for solutions to
\[ \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \]
However, if you think about it, every vector x in $\mathbb{R}^2$ will be a solution to this system since we are multiplying x by the zero matrix. Hence the null space of 0 is all of $\mathbb{R}^2$.

To see some examples of a more complicated null space, check out Example 7 from the section on Basis and Example 2 in the Fundamental Subspaces section. Both of these examples have more going on in them, but the first step is to write down the null space of a matrix, so you can check out the first step of the examples and then ignore the remainder.

Now, let's go back and take a look at all the null spaces that we saw in the previous example. The null space for the first matrix was $\{\mathbf{0}\}$. For the second matrix the null space was the line through the origin given by $x_1 - 7 x_2 = 0$. The null space for the zero matrix was all of $\mathbb{R}^2$. Thinking back to the early parts of this section we can see that all of these are in fact subspaces of $\mathbb{R}^2$.
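These three null spaces can also be computed directly. The sketch below (our own illustration; the notes themselves do not use software) uses SymPy's `nullspace` method, which returns a list of basis vectors for the null space.

```python
# Computing the null spaces from Example 4 with SymPy.

from sympy import Matrix

A = Matrix([[2, 0], [-4, 10]])
B = Matrix([[1, -7], [-3, 21]])
Z = Matrix([[0, 0], [0, 0]])

print(A.nullspace())  # []                    -> only the zero vector, i.e. {0}
print(B.nullspace())  # [Matrix([[7], [1]])]  -> all multiples of (7, 1)
print(Z.nullspace())  # two basis vectors     -> all of R^2
```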
In fact, this will always be the case, as the following theorem shows.

Theorem 2 Suppose that A is an $n \times m$ matrix. Then the null space of A will be a subspace of $\mathbb{R}^m$.

Proof : We know that the null space of A consists of all the solutions to the system $A\mathbf{x} = \mathbf{0}$. First, we should point out that the zero vector, 0, in $\mathbb{R}^m$ will be a solution to this system, and so we know that the null space is not empty. This is a good thing, since a vector space (subspace or not) must contain at least one element.

Now that we know that the null space is not empty, let x and y be two elements from the null space and let c be any scalar. We just need to show that the sum and scalar multiple of these are also in the null space and we'll be done.

Let's start with the sum.
\[ A(\mathbf{x} + \mathbf{y}) = A\mathbf{x} + A\mathbf{y} = \mathbf{0} + \mathbf{0} = \mathbf{0} \]
The sum, $\mathbf{x} + \mathbf{y}$, is a solution to $A\mathbf{x} = \mathbf{0}$ and so is in the null space. The null space is therefore closed under addition.

Next, let's take a look at the scalar multiple.
\[ A(c\mathbf{x}) = c A\mathbf{x} = c\mathbf{0} = \mathbf{0} \]
The scalar multiple is also in the null space, and so the null space is closed under scalar multiplication.

Therefore the null space is a subspace of $\mathbb{R}^m$.

Span

In this section we will cover a topic that we'll see off and on over the course of this chapter. Let's start off by going back to part (b) of Example 4 from the previous section. In that example we saw that the null space of the given matrix consisted of all the vectors of the form
\[ \mathbf{x} = (7t, t) = t(7, 1) \qquad \qquad t \text{ is any real number} \]
We would like a more compact way of stating this result, and by the end of this section we'll have that.

Let's first revisit an idea that we saw quite some time ago. In the section on Matrix Arithmetic we looked at linear combinations of matrices and columns of matrices. We can also talk about linear combinations of vectors.

Definition 1 We say the vector w from the vector space V is a linear combination of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$, all from V, if there are scalars $c_1, c_2, \ldots, c_n$ so that w can be written
\[ \mathbf{w} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n \]

So, we can see that the null space we were looking at above is in fact all the linear combinations of the vector (7,1). It may seem strange to talk about linear combinations of a single vector, since that is really just scalar multiplication, but we can think of it that way if we need to.

The null space above was not the first time that we've seen linear combinations of vectors, however. When we were looking at Euclidean n-space we introduced these things called the standard basis vectors. The standard basis vectors for $\mathbb{R}^n$ were defined as,
\[ \mathbf{e}_1 = (1, 0, 0, \ldots, 0) \qquad \mathbf{e}_2 = (0, 1, 0, \ldots, 0) \qquad \cdots \qquad \mathbf{e}_n = (0, 0, 0, \ldots, 1) \]
We saw that we could take any vector $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ from $\mathbb{R}^n$ and write it as,
\[ \mathbf{u} = u_1 \mathbf{e}_1 + u_2 \mathbf{e}_2 + \cdots + u_n \mathbf{e}_n \]
Or, in other words, we could write u as a linear combination of the standard basis vectors, $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$. We will be revisiting this idea again in a couple of sections, but the point here is simply that we've seen linear combinations of vectors prior to us actually discussing them here.

Let's take a look at an example or two.

Example 1 Determine if the vector is a linear combination of the two given vectors.
(a) Is $\mathbf{w} = (-12, 20)$ a linear combination of $\mathbf{v}_1 = (-1, 2)$ and $\mathbf{v}_2 = (4, -6)$?
(b) Is $\mathbf{w} = (4, 20)$ a linear combination of $\mathbf{v}_1 = (2, 10)$ and $\mathbf{v}_2 = (-3, -15)$?
(c) Is $\mathbf{w} = (1, -4)$ a linear combination of $\mathbf{v}_1 = (2, 10)$ and $\mathbf{v}_2 = (-3, -15)$?

Solution
(a) Is $\mathbf{w} = (-12, 20)$ a linear combination of $\mathbf{v}_1 = (-1, 2)$ and $\mathbf{v}_2 = (4, -6)$?
In each of these cases we'll need to set up and solve the following equation,
\[ \mathbf{w} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 \qquad \Rightarrow \qquad (-12, 20) = c_1(-1, 2) + c_2(4, -6) \]
Then set coefficients equal to arrive at the following system of equations,
\[ \begin{aligned} -c_1 + 4 c_2 &= -12 \\ 2 c_1 - 6 c_2 &= 20 \end{aligned} \]
If the system is consistent (i.e. has at least one solution) then w is a linear combination of the two vectors. If there is no solution then w is not a linear combination of the two vectors.

We'll leave it to you to verify that the solution to this system is $c_1 = 4$ and $c_2 = -2$. Therefore, w is a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$, and we can write $\mathbf{w} = 4\mathbf{v}_1 - 2\mathbf{v}_2$.

(b) Is $\mathbf{w} = (4, 20)$ a linear combination of $\mathbf{v}_1 = (2, 10)$ and $\mathbf{v}_2 = (-3, -15)$?

For this part we'll need to do the same kind of thing, so here is the system.
\[ \begin{aligned} 2 c_1 - 3 c_2 &= 4 \\ 10 c_1 - 15 c_2 &= 20 \end{aligned} \]
The solution to this system is,
\[ c_1 = 2 + \tfrac{3}{2}t \qquad c_2 = t \qquad \qquad t \text{ is any real number} \]
This means w is a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$. However, unlike the previous part, there are literally an infinite number of ways in which we can write the linear combination. So, any of the following combinations would work, for instance.
\[ \mathbf{w} = 2\mathbf{v}_1 + 0\mathbf{v}_2 \qquad \mathbf{w} = 0\mathbf{v}_1 - \tfrac{4}{3}\mathbf{v}_2 \qquad \mathbf{w} = 8\mathbf{v}_1 + 4\mathbf{v}_2 \qquad \mathbf{w} = -4\mathbf{v}_1 - 4\mathbf{v}_2 \]
There are of course many more; these are just a few of the possibilities.

(c) Is $\mathbf{w} = (1, -4)$ a linear combination of $\mathbf{v}_1 = (2, 10)$ and $\mathbf{v}_2 = (-3, -15)$?

Here is the system we'll need to solve for this part.
\[ \begin{aligned} 2 c_1 - 3 c_2 &= 1 \\ 10 c_1 - 15 c_2 &= -4 \end{aligned} \]
This system does not have a solution, and so w is not a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$.

So, this example was kept fairly simple, but if we add in more components and/or more vectors to the set, the problem will work in essentially the same manner. A code sketch automating this check is given below.
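Here is a small solver (our own illustration, not from the original notes) for the "is w a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$?" question, using SymPy's `linsolve` to solve the system $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{w}$ from Example 1.

```python
# Testing linear-combination membership by solving the linear system whose
# coefficient matrix has v1 and v2 as its columns.

from sympy import Matrix, linsolve, symbols

c1, c2 = symbols("c1 c2")

def combo_solutions(v1, v2, w):
    A = Matrix([[v1[0], v2[0]], [v1[1], v2[1]]])   # columns are v1, v2
    return linsolve((A, Matrix(w)), [c1, c2])

print(combo_solutions((-1, 2), (4, -6), (-12, 20)))  # {(4, -2)}: unique combo
print(combo_solutions((2, 10), (-3, -15), (4, 20)))  # one-parameter family
print(combo_solutions((2, 10), (-3, -15), (1, -4)))  # EmptySet: no solution
```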
Now that we've seen how linear combinations work and how to tell if a vector is a linear combination of a set of other vectors, we need to move into the real topic of this section. In the opening of this section we recalled a null space that we'd looked at in the previous section. We can now see that the null space from that example is nothing more than all the linear combinations of the vector (7,1) (and again, it is kind of strange to be talking about linear combinations of a single vector). As pointed out at the time, we're after a more compact notation for denoting this. It is now time to give that notation.

Definition 2 Let $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ be a set of vectors in a vector space V and let W be the set of all linear combinations of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$. The set W is the span of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ and is denoted by
\[ W = \mathrm{span}(S) \qquad \text{OR} \qquad W = \mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\} \]
We also say that the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ span W.

So, with this notation we can now see that the null space that we examined at the start of this section is nothing more than,
\[ \mathrm{span}\{(7,1)\} \]

Before we move on to some examples we should get a nice theorem out of the way.

Theorem 1 Let $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ be vectors in a vector space V and let their span be $W = \mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$. Then,
(a) W is a subspace of V.
(b) W is the smallest subspace of V that contains all of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$.

Proof :
(a) So, we need to show that W is closed under addition and scalar multiplication. Let u and w be any two vectors from W. Now, since W is the set of all linear combinations of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$, both u and w must be linear combinations of these vectors. So, there are scalars $c_1, c_2, \ldots, c_n$ and $k_1, k_2, \ldots, k_n$ so that,
\[ \mathbf{u} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n \qquad \text{and} \qquad \mathbf{w} = k_1 \mathbf{v}_1 + k_2 \mathbf{v}_2 + \cdots + k_n \mathbf{v}_n \]
Now, let's take a look at the sum.
\[ \mathbf{u} + \mathbf{w} = (c_1 + k_1)\mathbf{v}_1 + (c_2 + k_2)\mathbf{v}_2 + \cdots + (c_n + k_n)\mathbf{v}_n \]
So the sum, $\mathbf{u} + \mathbf{w}$, is a linear combination of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ and hence must be in W, and so W is closed under addition.

Now, let k be any scalar and let's take a look at,
\[ k\mathbf{u} = (k c_1)\mathbf{v}_1 + (k c_2)\mathbf{v}_2 + \cdots + (k c_n)\mathbf{v}_n \]
As we can see, the scalar multiple, $k\mathbf{u}$, is a linear combination of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ and hence must be in W, and so W is closed under scalar multiplication.

Therefore, W must be a subspace of V.

(b) When we say that W is the smallest subspace of V that contains the set of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$, we're really saying that if $W'$ is also a subspace of V that contains $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ then it will also contain a complete copy of W as well.

So, let's start this off by noticing that W does in fact contain each of the $\mathbf{v}_i$'s, since
\[ \mathbf{v}_i = 0\mathbf{v}_1 + 0\mathbf{v}_2 + \cdots + 1\mathbf{v}_i + \cdots + 0\mathbf{v}_n \]
Now, let $W'$ be a vector space that contains $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ and consider any vector u from W. If we can show that u must also be in $W'$, then we'll have shown that $W'$ contains a copy of W, since it will contain all the vectors in W. Now, u is in W and so must be a linear combination of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$,
\[ \mathbf{u} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n \]
Each of the terms in this sum, $c_i \mathbf{v}_i$, is a scalar multiple of a vector that is in $W'$, and since $W'$ is a vector space it must be closed under scalar multiplication, so each $c_i \mathbf{v}_i$ is in $W'$. But this means that u is the sum of a bunch of vectors that are in $W'$, which is closed under addition, and so u must in fact be in $W'$. We've now shown that $W'$ contains every vector from W and so must contain W itself.

Now, let's take a look at some examples of spans.

Example 2 Describe the span of each of the following sets of vectors.
(a) $\mathbf{v}_1 = (1, 0, 0)$ and $\mathbf{v}_2 = (0, 1, 0)$.
(b) $\mathbf{v}_1 = (1, 0, 1, 0)$ and $\mathbf{v}_2 = (0, 1, 0, -1)$.

Solution
(a) The span of this set of vectors, $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2\}$, is the set of all linear combinations, and we can write down a general linear combination for these two vectors.
\[ a\mathbf{v}_1 + b\mathbf{v}_2 = (a, 0, 0) + (0, b, 0) = (a, b, 0) \]
So, it looks like $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2\}$ will be all of the vectors from $\mathbb{R}^3$ that are of the form $(a, b, 0)$ for any choices of a and b.

(b) This one is fairly similar to the first one. A general linear combination will look like,
\[ a\mathbf{v}_1 + b\mathbf{v}_2 = (a, 0, a, 0) + (0, b, 0, -b) = (a, b, a, -b) \]
So, $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2\}$ will be all the vectors from $\mathbb{R}^4$ of the form $(a, b, a, -b)$ for any choices of a and b.

Example 3 Describe the span of each of the following sets of "vectors".
(a) $\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$
(b) $\mathbf{v}_1 = 1$, $\mathbf{v}_2 = x$, and $\mathbf{v}_3 = x^3$

Solution
These work exactly the same as the previous set of examples. The only difference is that this time we aren't working in $\mathbb{R}^n$.

(a) Here is a general linear combination of these "vectors".
\[ a\mathbf{v}_1 + b\mathbf{v}_2 = \begin{bmatrix} a & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & b \end{bmatrix} = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} \]
Here it looks like $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2\}$ will be all the diagonal matrices in $M_{22}$.
(b) A general linear combination in this case is,
\[ a\mathbf{v}_1 + b\mathbf{v}_2 + c\mathbf{v}_3 = a + bx + c x^3 \]
In this case $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ will be all the polynomials from $P_3$ that do not have a quadratic term.

Now, let's see if we can determine a set of vectors that will span some of the common vector spaces that we've seen. What we'll need in each of these examples is a set of vectors with which we can write a general vector from the space as a linear combination of the vectors in the set.

Example 4 Determine a set of vectors that will exactly span each of the following vector spaces.
(a) $\mathbb{R}^n$
(b) $M_{22}$
(c) $P_n$

Solution
Okay, before we start this, let's think about just what we need to show here. We'll need to find a set of vectors so that the span of that set will be exactly the space given. In other words, we need to show that the span of our proposed set of vectors is in fact the same set as the vector space.

So just what do we need to do to mathematically show that two sets are equal? Let's suppose that we want to show that A and B are equal sets. To do this we'll need to show that each a in A will be in B, and in doing so we'll have shown that B will at the least contain all of A. Likewise, we'll need to show that each b in B will be in A, and in doing that we'll have shown that A will contain all of B. However, the only way that A can contain all of B and B can contain all of A is for A and B to be the same set.

So, for our example we'll need to determine a possible set of spanning vectors, show that every vector from our vector space is in the span of our set of vectors, and then show that each vector in the span will also be in the vector space.

(a) $\mathbb{R}^n$

We've pretty much done this one already. Earlier in the section we showed that any vector from $\mathbb{R}^n$ can be written as a linear combination of the standard basis vectors, $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$, and so at the least the span of the standard basis vectors will contain all of $\mathbb{R}^n$. However, since any linear combination of the standard basis vectors is going to be a vector in $\mathbb{R}^n$, we can see that $\mathbb{R}^n$ must also contain the span of the standard basis vectors. Therefore, the span of the standard basis vectors must be $\mathbb{R}^n$.

(b) $M_{22}$

We can use the result of Example 3(a) above as a guide here. In that example we saw a set of matrices that would span all the diagonal matrices in $M_{22}$, and so we can do a natural extension to get a set that will span all of $M_{22}$. It looks like the following set should do it.
\[ \mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \quad \mathbf{v}_2 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \quad \mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \quad \mathbf{v}_4 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \]
Clearly any linear combination of these four matrices will be a $2 \times 2$ matrix and hence in $M_{22}$, and so the span of these matrices must be contained in $M_{22}$.

Likewise, given any matrix from $M_{22}$,
\[ A = \begin{bmatrix} a & c \\ b & d \end{bmatrix} \]
we can write it as the following linear combination of these "vectors",
\[ A = \begin{bmatrix} a & c \\ b & d \end{bmatrix} = a\mathbf{v}_1 + b\mathbf{v}_2 + c\mathbf{v}_3 + d\mathbf{v}_4 \]
and so $M_{22}$ must be contained in the span of these vectors, and so these vectors will span $M_{22}$.

(c) $P_n$

We can use Example 3(b) to help with this one. First recall that $P_n$ is the set of all polynomials of degree n or less. Using Example 3(b) as a guide, it looks like the following set of "vectors" will work for us.
\[ \mathbf{v}_0 = 1, \quad \mathbf{v}_1 = x, \quad \mathbf{v}_2 = x^2, \quad \ldots, \quad \mathbf{v}_n = x^n \]
Note that we used subscripts that matched the degree of the term and so started at $\mathbf{v}_0$ instead of the usual $\mathbf{v}_1$.

It should be clear (hopefully) that a linear combination of these is a polynomial of degree n or less and so will be in $P_n$. Therefore the span of these vectors will be contained in $P_n$. Likewise, we can write a general polynomial of degree n or less,
\[ \mathbf{p} = a_0 + a_1 x + \cdots + a_n x^n \]
as the following linear combination
\[ \mathbf{p} = a_0 \mathbf{v}_0 + a_1 \mathbf{v}_1 + \cdots + a_n \mathbf{v}_n \]
Therefore $P_n$ is contained in the span of these vectors, and this means that the span of these vectors is exactly $P_n$.

There is one last idea about spans that we need to discuss, and it's best illustrated with an example.

Example 5 Determine if the following sets of vectors will span $\mathbb{R}^3$.
(a) $\mathbf{v}_1 = (2, 0, 1)$, $\mathbf{v}_2 = (-1, 3, 4)$, and $\mathbf{v}_3 = (1, 1, -2)$.
(b) $\mathbf{v}_1 = (1, 2, -1)$, $\mathbf{v}_2 = (3, -1, 1)$, and $\mathbf{v}_3 = (-3, 8, -5)$.

Solution
(a) $\mathbf{v}_1 = (2, 0, 1)$, $\mathbf{v}_2 = (-1, 3, 4)$, and $\mathbf{v}_3 = (1, 1, -2)$.

Okay, let's think about how we've got to approach this. Clearly the span of these vectors will be in $\mathbb{R}^3$ since they are vectors from $\mathbb{R}^3$. The real question is whether or not $\mathbb{R}^3$ will be contained in the span of these vectors, $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. In the previous example our sets of vectors were ones for which we could easily show this. However, in this case it's not so clear. So to answer the question here we'll do the following. Choose a general vector from $\mathbb{R}^3$, $\mathbf{u} = (u_1, u_2, u_3)$, and determine if we can find scalars $c_1$, $c_2$, and $c_3$ so that u is a linear combination of the given vectors. Or,
\[ (u_1, u_2, u_3) = c_1(2, 0, 1) + c_2(-1, 3, 4) + c_3(1, 1, -2) \]
If we set components equal we arrive at the following system of equations,
\[ \begin{aligned} 2 c_1 - c_2 + c_3 &= u_1 \\ 3 c_2 + c_3 &= u_2 \\ c_1 + 4 c_2 - 2 c_3 &= u_3 \end{aligned} \]
In matrix form this is,
\[ \begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & 1 \\ 1 & 4 & -2 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \]
What we need to do is determine if this system will be consistent (i.e. have at least one solution) for every possible choice of $\mathbf{u} = (u_1, u_2, u_3)$. Nicely enough this is very easy to do if you recall Theorem 9 from the section on Determinant Properties. This theorem tells us that this system will be consistent for every choice of $\mathbf{u} = (u_1, u_2, u_3)$ provided the coefficient matrix is invertible, and we can check that by doing a quick determinant computation. So, if we denote the coefficient matrix as A, we'll leave it to you to verify that $\det(A) = -24$.

Therefore the coefficient matrix is invertible, and so this system will have a solution for every choice of $\mathbf{u} = (u_1, u_2, u_3)$. This in turn tells us that $\mathbb{R}^3$ is contained in $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$, and since we already know the span is contained in $\mathbb{R}^3$ we've now shown that
\[ \mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\} = \mathbb{R}^3 \]
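The determinant computations for this example are easy to verify numerically. Here is a quick NumPy sketch (our own illustration, not from the original notes) covering both part (a) above and part (b) below.

```python
# Determinant checks for Example 5: a nonzero determinant means the
# coefficient matrix is invertible, so c1*v1 + c2*v2 + c3*v3 = u is
# solvable for every u in R^3 and the vectors span R^3.

import numpy as np

# Part (a): columns are v1, v2, v3.
A = np.array([[2.0, -1.0,  1.0],
              [0.0,  3.0,  1.0],
              [1.0,  4.0, -2.0]])
print(np.linalg.det(A))   # -24.0 (up to rounding): these vectors span R^3

# Part (b): columns are v1, v2, v3.
B = np.array([[ 1.0,  3.0, -3.0],
              [ 2.0, -1.0,  8.0],
              [-1.0,  1.0, -5.0]])
print(np.linalg.det(B))   # 0.0 (up to rounding): these do NOT span R^3
```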
(b) $\mathbf{v}_1 = (1, 2, -1)$, $\mathbf{v}_2 = (3, -1, 1)$, and $\mathbf{v}_3 = (-3, 8, -5)$.

We'll do this one a little quicker. As with the first part, let's choose a general vector $\mathbf{u} = (u_1, u_2, u_3)$ from $\mathbb{R}^3$ and form up the system that we need to solve. We'll leave it to you to verify that the matrix form of this system is,
\[ \begin{bmatrix} 1 & 3 & -3 \\ 2 & -1 & 8 \\ -1 & 1 & -5 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \]
This system will have a solution for every choice of $\mathbf{u} = (u_1, u_2, u_3)$ if the coefficient matrix, A, is invertible. However, in this case we have $\det(A) = 0$ (you should verify this), and so the coefficient matrix is not invertible.

This in turn tells us that there is at least one choice of $\mathbf{u} = (u_1, u_2, u_3)$ for which this system will not have a solution, and so that u cannot be written as a linear combination of these three vectors. Note that there are in fact infinitely many choices of $\mathbf{u} = (u_1, u_2, u_3)$ that will not yield solutions!

Now, we know that $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is contained in $\mathbb{R}^3$, but we've just shown that there is at least one vector from $\mathbb{R}^3$ that is not contained in $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$, and so the span of these three vectors will not be all of $\mathbb{R}^3$.

This example has shown us two things. First, it has shown us that we can't just write down any set of three vectors and expect those three vectors to span $\mathbb{R}^3$. This is an idea we're going to be looking at in much greater detail in the next couple of sections. Secondly, we've now seen at least two different sets of vectors that will span $\mathbb{R}^3$: the three vectors from Example 5(a) as well as the standard basis vectors for $\mathbb{R}^3$. This tells us that the set of vectors that will span a vector space is not unique. In other words, we can have more than one set of vectors span the same vector space.

Linear Independence

In the previous section we saw several examples of writing a particular vector as a linear combination of other vectors. However, as we saw in Example 1(b) of that section, there is sometimes more than one linear combination of the same set of vectors that can be used for a given vector. We also saw in the previous section that some sets of vectors, $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$, can span a vector space. Recall that by span we mean that every vector in the space can be written as a linear combination of the vectors in S. In this section we'd like to start looking at when it will be possible to express a given vector from a vector space as exactly one linear combination of the set S.

We'll start this section off with the following definition.

Definition 1 Suppose $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a non-empty set of vectors. Form the vector equation,
\[ c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n = \mathbf{0} \]
This equation has at least one solution, namely, $c_1 = 0$, $c_2 = 0$, …, $c_n = 0$. This solution is called the trivial solution. If the trivial solution is the only solution to this equation, then the vectors in the set S are called linearly independent and the set is called a linearly independent set. If there is another solution, then the vectors in the set S are called linearly dependent and the set is called a linearly dependent set.

Let's take a look at some examples.
Example 1 Determine if each of the following sets of vectors are linearly independent or linearly dependent.
(a) $\mathbf{v}_1 = (3, -1)$ and $\mathbf{v}_2 = (-2, 2)$.
(b) $\mathbf{v}_1 = (12, -8)$ and $\mathbf{v}_2 = (-9, 6)$.
(c) $\mathbf{v}_1 = (1, 0, 0)$, $\mathbf{v}_2 = (0, 1, 0)$, and $\mathbf{v}_3 = (0, 0, 1)$.
(d) $\mathbf{v}_1 = (2, -2, 4)$, $\mathbf{v}_2 = (3, -5, 4)$, and $\mathbf{v}_3 = (0, 1, 1)$.

Solution
To answer the question here we'll need to set up the equation
\[ c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n = \mathbf{0} \]
for each part, combine the left side into a single vector, and then set all the components of the vector equal to zero (since it must be the zero vector, 0). At this point we've got a system of equations that we can solve. If we only get the trivial solution the vectors will be linearly independent, and if we get more than one solution the vectors will be linearly dependent.

(a) $\mathbf{v}_1 = (3, -1)$ and $\mathbf{v}_2 = (-2, 2)$.

We'll do this one in detail and then do the remaining parts quicker. We'll first set up the equation and get the left side combined into a single vector.
\[ c_1(3, -1) + c_2(-2, 2) = \mathbf{0} \qquad \Rightarrow \qquad (3 c_1 - 2 c_2, -c_1 + 2 c_2) = (0, 0) \]
Now, set each of the components equal to zero to arrive at the following system of equations.
\[ \begin{aligned} 3 c_1 - 2 c_2 &= 0 \\ -c_1 + 2 c_2 &= 0 \end{aligned} \]
Solving this system gives the following solution (we'll leave it to you to verify this),
\[ c_1 = 0 \qquad \qquad c_2 = 0 \]
The trivial solution is the only solution, and so these two vectors are linearly independent.

(b) $\mathbf{v}_1 = (12, -8)$ and $\mathbf{v}_2 = (-9, 6)$.

Here is the vector equation we need to solve.
\[ c_1(12, -8) + c_2(-9, 6) = \mathbf{0} \]
The system of equations that we'll need to solve is,
\[ \begin{aligned} 12 c_1 - 9 c_2 &= 0 \\ -8 c_1 + 6 c_2 &= 0 \end{aligned} \]
and the solution to this system is,
\[ c_1 = \tfrac{3}{4}t \qquad c_2 = t \qquad \qquad t \text{ is any real number} \]
We've got more than the trivial solution (note however that the trivial solution IS still a solution, there's just more than that this time), and so these vectors are linearly dependent.

(c) $\mathbf{v}_1 = (1, 0, 0)$, $\mathbf{v}_2 = (0, 1, 0)$, and $\mathbf{v}_3 = (0, 0, 1)$.

The only difference between this one and the previous two is the fact that we now have three vectors from $\mathbb{R}^3$. Here is the vector equation for this part.
\[ c_1(1, 0, 0) + c_2(0, 1, 0) + c_3(0, 0, 1) = \mathbf{0} \]
The system of equations to solve for this part is,
\[ c_1 = 0 \qquad c_2 = 0 \qquad c_3 = 0 \]
So, not much solving to do this time. It is clear that the only solution will be the trivial solution, and so these vectors are linearly independent.

(d) $\mathbf{v}_1 = (2, -2, 4)$, $\mathbf{v}_2 = (3, -5, 4)$, and $\mathbf{v}_3 = (0, 1, 1)$.

Here is the vector equation for this final part.
\[ c_1(2, -2, 4) + c_2(3, -5, 4) + c_3(0, 1, 1) = \mathbf{0} \]
The system of equations that we'll need to solve here is,
\[ \begin{aligned} 2 c_1 + 3 c_2 &= 0 \\ -2 c_1 - 5 c_2 + c_3 &= 0 \\ 4 c_1 + 4 c_2 + c_3 &= 0 \end{aligned} \]
The solution to this system is,
\[ c_1 = -\tfrac{3}{4}t \qquad c_2 = \tfrac{1}{2}t \qquad c_3 = t \qquad \qquad t \text{ is any real number} \]
We've got more solutions than the trivial solution, and so these three vectors are linearly dependent.

Note that we didn't really need to solve any of the systems above if we didn't want to. All we were interested in was whether or not the system had only the trivial solution or if there were more solutions in addition to the trivial solution. Theorem 9 from the Properties of the Determinant section can help us answer this question without solving the system. This theorem tells us that if the determinant of the coefficient matrix is non-zero then the system will have exactly one solution, namely the trivial solution. Likewise, it can be shown that if the determinant is zero then the system will have infinitely many solutions.

Therefore, once the system is set up, if the coefficient matrix is square all we really need to do is take the determinant of the coefficient matrix: if it is non-zero the set of vectors will be linearly independent, and if the determinant is zero then the set of vectors will be linearly dependent. If the coefficient matrix is not square then we can't take the determinant, and so we'll have no choice but to solve the system. This does not mean, however, that the actual solution to the system isn't ever important, as we'll see towards the end of the section.
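Here is the determinant shortcut from the remark above applied to Example 1 (our own illustration, not from the original notes). Each matrix has the vectors of one part as its columns; all four coefficient matrices happen to be square here.

```python
# Determinant test for linear independence: nonzero det -> independent,
# zero det -> dependent.

import numpy as np

systems = {
    "(a)": np.array([[3, -2], [-1, 2]]),
    "(b)": np.array([[12, -9], [-8, 6]]),
    "(c)": np.eye(3),
    "(d)": np.array([[2, 3, 0], [-2, -5, 1], [4, 4, 1]]),
}

for name, M in systems.items():
    d = np.linalg.det(M)
    verdict = "independent" if abs(d) > 1e-9 else "dependent"
    print(name, round(d, 6), verdict)
# (a) det 4 -> independent, (b) det 0 -> dependent,
# (c) det 1 -> independent, (d) det 0 -> dependent
```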
Before proceeding on we should point out that the vectors from part (c) of this example were actually the standard basis vectors for $\mathbb{R}^3$. In fact the standard basis vectors for $\mathbb{R}^n$,
\[ \mathbf{e}_1 = (1, 0, 0, \ldots, 0), \quad \mathbf{e}_2 = (0, 1, 0, \ldots, 0), \quad \ldots, \quad \mathbf{e}_n = (0, 0, 0, \ldots, 1) \]
will be linearly independent.

The vectors in the previous example all had the same number of components as there were vectors, i.e. two vectors from $\mathbb{R}^2$ or three vectors from $\mathbb{R}^3$. We should work a couple of examples that do not fit this mold to make sure that you understand that we don't need to have the same number of vectors as components.

Example 2 Determine if the following sets of vectors are linearly independent or linearly dependent.
(a) $\mathbf{v}_1 = (1, -3)$, $\mathbf{v}_2 = (-2, 2)$, and $\mathbf{v}_3 = (4, -1)$.
(b) $\mathbf{v}_1 = (-2, 1)$, $\mathbf{v}_2 = (-1, -3)$, and $\mathbf{v}_3 = (4, -2)$.
(c) $\mathbf{v}_1 = (1, 1, -1, 2)$, $\mathbf{v}_2 = (2, -2, 0, 2)$, and $\mathbf{v}_3 = (2, -8, 3, -1)$.
(d) $\mathbf{v}_1 = (1, -2, 3, -4)$, $\mathbf{v}_2 = (-1, 3, 4, 2)$, and $\mathbf{v}_3 = (1, 1, -2, -2)$.

Solution
These will work in pretty much the same manner as the previous set of examples. Again, we'll do the first part in some detail and then leave it to you to verify the details in the remaining parts. Also, we'll not be showing the details of solving the systems of equations, so you should verify all the solutions for yourself.

(a) $\mathbf{v}_1 = (1, -3)$, $\mathbf{v}_2 = (-2, 2)$, and $\mathbf{v}_3 = (4, -1)$.

Here is the vector equation we need to solve.
\[ c_1(1, -3) + c_2(-2, 2) + c_3(4, -1) = \mathbf{0} \qquad \Rightarrow \qquad (c_1 - 2 c_2 + 4 c_3, -3 c_1 + 2 c_2 - c_3) = (0, 0) \]
The system of equations that we need to solve is,
\[ \begin{aligned} c_1 - 2 c_2 + 4 c_3 &= 0 \\ -3 c_1 + 2 c_2 - c_3 &= 0 \end{aligned} \]
and this has the solution,
\[ c_1 = \tfrac{3}{2}t \qquad c_2 = \tfrac{11}{4}t \qquad c_3 = t \qquad \qquad t \text{ is any real number} \]
We've got more than the trivial solution, and so these vectors are linearly dependent. Note that we didn't really need to solve this system to know that they were linearly dependent. From Theorem 2 in the solving systems of equations section we know that if there are more unknowns than equations in a homogeneous system, then we will have infinitely many solutions.

(b) $\mathbf{v}_1 = (-2, 1)$, $\mathbf{v}_2 = (-1, -3)$, and $\mathbf{v}_3 = (4, -2)$.

Here is the vector equation for this part.
\[ c_1(-2, 1) + c_2(-1, -3) + c_3(4, -2) = \mathbf{0} \]
The system of equations we'll need to solve is,
\[ \begin{aligned} -2 c_1 - c_2 + 4 c_3 &= 0 \\ c_1 - 3 c_2 - 2 c_3 &= 0 \end{aligned} \]
Now, technically we don't need to solve this system for the same reason we really didn't need to solve the system in the previous part. There are more unknowns than equations, so the system will have infinitely many solutions (so more than the trivial solution), and therefore the vectors will be linearly dependent. However, let's solve it anyway, since there is an important idea we need to see in this part. Here is the solution.
\[ c_1 = 2t \qquad c_2 = 0 \qquad c_3 = t \qquad \qquad t \text{ is any real number} \]
In this case one of the scalars was zero. There is nothing wrong with this. We still have solutions other than the trivial solution, and so these vectors are linearly dependent. Note what it does say, however: $\mathbf{v}_1$ and $\mathbf{v}_3$ are linearly dependent themselves, regardless of $\mathbf{v}_2$.

(c) $\mathbf{v}_1 = (1, 1, -1, 2)$, $\mathbf{v}_2 = (2, -2, 0, 2)$, and $\mathbf{v}_3 = (2, -8, 3, -1)$.

Here is the vector equation for this part.
(c) $\mathbf{v}_1 = (1,1,-1,2)$, $\mathbf{v}_2 = (2,-2,0,2)$ and $\mathbf{v}_3 = (2,-8,3,-1)$.

Here is the vector equation for this part.

$$c_1(1,1,-1,2) + c_2(2,-2,0,2) + c_3(2,-8,3,-1) = \mathbf{0}$$

The system of equations that we'll need to solve this time is,

$$c_1 + 2c_2 + 2c_3 = 0 \qquad c_1 - 2c_2 - 8c_3 = 0 \qquad -c_1 + 3c_3 = 0 \qquad 2c_1 + 2c_2 - c_3 = 0$$

The solution to this system is,

$$c_1 = 3t \qquad c_2 = -\tfrac{5}{2}t \qquad c_3 = t \qquad t \text{ is any real number}$$

We've got more solutions than the trivial solution and so these three vectors are linearly dependent.

[Return to Problems]

(d) $\mathbf{v}_1 = (1,-2,3,-4)$, $\mathbf{v}_2 = (-1,3,4,2)$ and $\mathbf{v}_3 = (1,1,-2,-2)$.

The vector equation for this part is,

$$c_1(1,-2,3,-4) + c_2(-1,3,4,2) + c_3(1,1,-2,-2) = \mathbf{0}$$

The system of equations is,

$$c_1 - c_2 + c_3 = 0 \qquad -2c_1 + 3c_2 + c_3 = 0 \qquad 3c_1 + 4c_2 - 2c_3 = 0 \qquad -4c_1 + 2c_2 - 2c_3 = 0$$

This system has only the trivial solution and so these three vectors are linearly independent.

[Return to Problems]

We should make one quick remark about part (b) of this problem. In this case we had a set of three vectors and one of the scalars was zero. This will happen on occasion and, as noted, this only means that the vectors with the zero scalars are not really required in order to make the set linearly dependent. This part has shown that if you have a set of vectors and a subset is linearly dependent then the whole set will be linearly dependent.

Often the only way to determine if a set of vectors is linearly independent or linearly dependent is to set up a system as above and solve it. However, there are a couple of cases where we can get the answer just by looking at the set of vectors.

Theorem 1 A finite set of vectors that contains the zero vector will be linearly dependent.

Proof : This is a fairly simple proof. Let $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n, \mathbf{0}\}$ be any set of vectors that contains the zero vector as shown. We can then set up the following equation.

$$0\mathbf{v}_1 + 0\mathbf{v}_2 + \cdots + 0\mathbf{v}_n + 1(\mathbf{0}) = \mathbf{0}$$

We can see from this that we have a non-trivial solution to this equation and so the set of vectors is linearly dependent.

Theorem 2 Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k\}$ is a set of vectors in $\mathbb{R}^n$. If $k > n$ then the set of vectors is linearly dependent.

We're not going to prove this one but we will outline the basic proof. In fact, we saw how to prove this theorem in parts (a) and (b) from Example 2. If we set up the system of equations corresponding to the equation,

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$$

we will get a system of equations that has more unknowns than equations (you should verify this) and this means that the system will have infinitely many solutions. The vectors will therefore be linearly dependent.

To this point we've only seen examples of linear independence/dependence with sets of vectors in $\mathbb{R}^n$. We should now take a look at some examples of vectors from some other vector spaces.

Example 3 Determine if the following sets of vectors are linearly independent or linearly dependent.
(a) $\mathbf{v}_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$, $\mathbf{v}_2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$, and $\mathbf{v}_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$. [Solution]
(b) $\mathbf{v}_1 = \begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} 4 & 1 \\ 0 & -3 \end{bmatrix}$. [Solution]
(c) $\mathbf{v}_1 = \begin{bmatrix} 8 & -2 \\ 10 & 0 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} -12 & 3 \\ -15 & 0 \end{bmatrix}$. [Solution]

Solution
Okay, the basic process here is pretty much the same as the previous set of examples. It just may not appear that way at first however. We'll need to remember that this time the zero vector, $\mathbf{0}$, is in fact the zero matrix of the same size as the vectors in the given set.
(a) $\mathbf{v}_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$, $\mathbf{v}_2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$, and $\mathbf{v}_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$.

We'll first need to set up the "vector" equation,

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0} \qquad\Rightarrow\qquad c_1\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} + c_2\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} + c_3\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Next, combine the "vectors" (okay, they're matrices so let's call them that...) on the left into a single matrix using basic matrix scalar multiplication and addition.

$$\begin{bmatrix} c_1 & 0 & c_2 \\ 0 & c_3 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Now, we need both sides to be equal. This means that the three entries in the matrix on the left that are not already zero need to be set equal to zero. This gives the following system of equations.

$$c_1 = 0 \qquad c_2 = 0 \qquad c_3 = 0$$

Of course this isn't really much of a system as it tells us that we must have the trivial solution and so these matrices (or vectors if you want to be exact) are linearly independent.

[Return to Problems]

(b) $\mathbf{v}_1 = \begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} 4 & 1 \\ 0 & -3 \end{bmatrix}$.

So we can see that, for the most part, these problems work the same way as the previous problems did. We just need to set up a system of equations and solve. For the remainder of these problems we'll not put in the detail that we did in the first part.

Here is the vector equation we need to solve for this part.

$$c_1\begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix} + c_2\begin{bmatrix} 4 & 1 \\ 0 & -3 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

The system of equations we need to solve here is,

$$c_1 + 4c_2 = 0 \qquad 2c_1 + c_2 = 0 \qquad -c_1 - 3c_2 = 0$$

We'll leave it to you to verify that the only solution to this system is the trivial solution and so these matrices are linearly independent.

[Return to Problems]

(c) $\mathbf{v}_1 = \begin{bmatrix} 8 & -2 \\ 10 & 0 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} -12 & 3 \\ -15 & 0 \end{bmatrix}$.

Here is the vector equation for this part,

$$c_1\begin{bmatrix} 8 & -2 \\ 10 & 0 \end{bmatrix} + c_2\begin{bmatrix} -12 & 3 \\ -15 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

and the system of equations is,

$$8c_1 - 12c_2 = 0 \qquad -2c_1 + 3c_2 = 0 \qquad 10c_1 - 15c_2 = 0$$

The solution to this system is,

$$c_1 = \tfrac{3}{2}t \qquad c_2 = t \qquad t \text{ is any real number}$$

So, we've got solutions other than the trivial solution and so these vectors are linearly dependent.

[Return to Problems]
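Matrix "vectors" can be handled with the same numerical machinery by flattening each matrix into an ordinary vector. Here is a minimal sketch for part (c) in Python with NumPy; the rank of the resulting matrix counts how many of the flattened vectors are independent.

```python
import numpy as np

# Example 3(c): flatten each 2x2 matrix into a vector in R^4 and stack
# the results as columns.
v1 = np.array([[8.0, -2.0], [10.0, 0.0]]).flatten()
v2 = np.array([[-12.0, 3.0], [-15.0, 0.0]]).flatten()
A = np.column_stack([v1, v2])

# Rank 2 would mean independent; here v2 = -(3/2) v1, so the rank is 1.
print(np.linalg.matrix_rank(A))  # 1 -> linearly dependent
```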
Example 4 Determine if the following sets of vectors are linearly independent or linearly dependent.
(a) $\mathbf{p}_1 = 1$, $\mathbf{p}_2 = x$, and $\mathbf{p}_3 = x^2$ in $P_2$. [Solution]
(b) $\mathbf{p}_1 = x - 3$, $\mathbf{p}_2 = x^2 + 2x$, and $\mathbf{p}_3 = x^2 + 1$ in $P_2$. [Solution]
(c) $\mathbf{p}_1 = 2x^2 - x + 7$, $\mathbf{p}_2 = x^2 + 4x + 2$, and $\mathbf{p}_3 = x^2 - 2x + 4$ in $P_2$. [Solution]

Solution
Again, these will work in essentially the same manner as the previous problems. In this problem set the zero vector, $\mathbf{0}$, is the zero function. Since we're actually working in $P_2$ for all these parts we can think of this as the following polynomial.

$$\mathbf{0} = 0x^2 + 0x + 0$$

In other words, a second degree polynomial with zero coefficients.

(a) $\mathbf{p}_1 = 1$, $\mathbf{p}_2 = x$, and $\mathbf{p}_3 = x^2$ in $P_2$.

Let's first set up the equation that we need to solve.

$$c_1\mathbf{p}_1 + c_2\mathbf{p}_2 + c_3\mathbf{p}_3 = \mathbf{0} \qquad\Rightarrow\qquad c_1(1) + c_2x + c_3x^2 = 0 + 0x + 0x^2$$

Now, we could set up a system of equations here, however we don't need to. In order for these two second degree polynomials to be equal the coefficient of each term must be equal. At this point it should be pretty clear that the polynomial on the left will only equal the zero polynomial if all of its coefficients are zero.

So, the only solution to the vector equation will be the trivial solution and so these polynomials (or vectors if you want to be precise) are linearly independent.

[Return to Problems]

(b) $\mathbf{p}_1 = x - 3$, $\mathbf{p}_2 = x^2 + 2x$, and $\mathbf{p}_3 = x^2 + 1$ in $P_2$.

The vector equation for this part is,

$$c_1(x - 3) + c_2(x^2 + 2x) + c_3(x^2 + 1) = 0x^2 + 0x + 0$$
$$(c_2 + c_3)x^2 + (c_1 + 2c_2)x + (-3c_1 + c_3) = 0x^2 + 0x + 0$$

Now, as with the previous part, the coefficients of each term on the left must be zero in order for this polynomial to be the zero vector. This leads to the following system of equations.

$$c_2 + c_3 = 0 \qquad c_1 + 2c_2 = 0 \qquad -3c_1 + c_3 = 0$$

The only solution to this system is the trivial solution and so these polynomials are linearly independent.

[Return to Problems]

(c) $\mathbf{p}_1 = 2x^2 - x + 7$, $\mathbf{p}_2 = x^2 + 4x + 2$, and $\mathbf{p}_3 = x^2 - 2x + 4$ in $P_2$.

In this part the vector equation is,

$$c_1(2x^2 - x + 7) + c_2(x^2 + 4x + 2) + c_3(x^2 - 2x + 4) = 0x^2 + 0x + 0$$
$$(2c_1 + c_2 + c_3)x^2 + (-c_1 + 4c_2 - 2c_3)x + (7c_1 + 2c_2 + 4c_3) = 0x^2 + 0x + 0$$

The system of equations we need to solve is,

$$2c_1 + c_2 + c_3 = 0 \qquad -c_1 + 4c_2 - 2c_3 = 0 \qquad 7c_1 + 2c_2 + 4c_3 = 0$$

The solution to this system is,

$$c_1 = -\tfrac{2}{3}t \qquad c_2 = \tfrac{1}{3}t \qquad c_3 = t \qquad t \text{ is any real number}$$

So, we have more solutions than the trivial solution and so these polynomials are linearly dependent.

[Return to Problems]
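Polynomials in $P_2$ can be treated the same way by working with their coefficient vectors. Here is a minimal sketch for part (c) in Python with SymPy; the rows hold the constant, $x$ and $x^2$ coefficients of $\mathbf{p}_1$, $\mathbf{p}_2$ and $\mathbf{p}_3$, one polynomial per column.

```python
import sympy as sp

# Example 4(c): columns are the coefficient vectors of the polynomials
# p1 = 2x^2 - x + 7, p2 = x^2 + 4x + 2, p3 = x^2 - 2x + 4.
A = sp.Matrix([[7, 2, 4],    # constant terms
               [-1, 4, -2],  # x coefficients
               [2, 1, 1]])   # x^2 coefficients

print(A.det())        # 0 -> linearly dependent
print(A.nullspace())  # [Matrix([[-2/3], [1/3], [1]])], i.e. t = 1 above
```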
Now that we've seen quite a few examples of linearly independent and linearly dependent vectors we've got one final topic that we want to discuss in this section. Let's go back and examine the results of the very first example that we worked in this section and in particular let's start with the final part.

In this part we looked at the vectors $\mathbf{v}_1 = (2,-2,4)$, $\mathbf{v}_2 = (3,-5,4)$, and $\mathbf{v}_3 = (0,1,1)$ and determined that they were linearly dependent. We did this by solving the vector equation,

$$c_1(2,-2,4) + c_2(3,-5,4) + c_3(0,1,1) = \mathbf{0}$$

and found that it had the solution,

$$c_1 = -\tfrac{3}{4}t \qquad c_2 = \tfrac{1}{2}t \qquad c_3 = t \qquad t \text{ is any real number}$$

We knew that the vectors were linearly dependent because there were solutions to the equation other than the trivial solution. Let's take a look at one of them. Say,

$$c_1 = -\tfrac{3}{2} \qquad c_2 = 1 \qquad c_3 = 2$$

In fact, let's plug these values into the vector equation above.

$$-\tfrac{3}{2}(2,-2,4) + (3,-5,4) + 2(0,1,1) = \mathbf{0}$$

Now, if we rearrange this a little we arrive at,

$$(3,-5,4) = \tfrac{3}{2}(2,-2,4) - 2(0,1,1)$$

or, in a little more compact form:

$$\mathbf{v}_2 = \tfrac{3}{2}\mathbf{v}_1 - 2\mathbf{v}_3$$

So, we were able to write one of the vectors as a linear combination of the other two. Notice as well that we could have just as easily written $\mathbf{v}_1$ as a linear combination of $\mathbf{v}_2$ and $\mathbf{v}_3$, or $\mathbf{v}_3$ as a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$, if we'd wanted to.

Let's see if we can do this with the three vectors from the third part of this example. In this part we were looking at the three vectors $\mathbf{v}_1 = (1,0,0)$, $\mathbf{v}_2 = (0,1,0)$, and $\mathbf{v}_3 = (0,0,1)$ and in that part we determined that these vectors were linearly independent. Let's see if we can write $\mathbf{v}_1$ as a linear combination of $\mathbf{v}_2$ and $\mathbf{v}_3$. If we can, we'll be able to find constants $c_1$ and $c_2$ that will make the following equation true.

$$(1,0,0) = c_1(0,1,0) + c_2(0,0,1) = (0, c_1, c_2)$$

Now, while we can find values of $c_1$ and $c_2$ that will make the second and third entries zero as we need them to, we're in some pretty serious trouble with the first entry. In the vector on the left we've got a 1 in the first entry and in the vector on the right we've got a 0 in the first entry. So, there is no way we can write the first vector as a linear combination of the other two. You should also verify that we can't do this with any of the other combinations either.

So, what have we seen here with these two examples? With a set of linearly dependent vectors we were able to write at least one of them as a linear combination of the other vectors in the set, and with a set of linearly independent vectors we were not able to do this for any of the vectors. This will always be the case. With a set of linearly independent vectors we will never be able to write one of the vectors as a linear combination of the other vectors in the set. On the other hand, if we have a set of linearly dependent vectors then at least one of them can be written as a linear combination of the remaining vectors.

In the example of linearly dependent vectors we were looking at above we could write any of the vectors as a linear combination of the others. This will not always be the case; to see this take a look at Example 2(b). In this example we determined that the vectors $\mathbf{v}_1 = (-2,1)$, $\mathbf{v}_2 = (-1,-3)$ and $\mathbf{v}_3 = (4,-2)$ were linearly dependent. We also saw that the solution to the equation,

$$c_1(-2,1) + c_2(-1,-3) + c_3(4,-2) = \mathbf{0}$$

was given by

$$c_1 = 2t \qquad c_2 = 0 \qquad c_3 = t \qquad t \text{ is any real number}$$

and as we saw above we can always use this to determine how to write at least one of the vectors as a linear combination of the remaining vectors. Simply pick a value of $t$ and then rearrange as you need to. Doing this in our case we see that we can do one of the following.

$$(4,-2) = -2(-2,1) - 0(-1,-3) \qquad\qquad (-2,1) = -0(-1,-3) - \tfrac{1}{2}(4,-2)$$

It's easy in this case to write the first or the third vector as a combination of the other vectors. However, because the coefficient of the second vector is zero, there is no way that we can write the second vector as a linear combination of the first and third vectors. What that means here is that the first and third vectors are linearly dependent by themselves (as we pointed out in that example) but the first and second are linearly independent vectors, as are the second and third, if we just look at them as pairs of vectors (you should verify this). This can be a useful idea about linearly independent/dependent vectors on occasion.

Basis and Dimension

In this section we're going to take a look at an important idea in the study of vector spaces. We will also be drawing heavily on the ideas from the previous two sections and so make sure that you are comfortable with the ideas of span and linear independence.

We'll start this section off with the following definition.

Definition 1 Suppose $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a set of vectors from the vector space $V$. Then $S$ is called a basis (plural is bases) for $V$ if both of the following conditions hold.
(a) $\mathrm{span}(S) = V$, i.e. $S$ spans the vector space $V$.
(b) $S$ is a linearly independent set of vectors.

Let's take a look at some examples.

Example 1 Determine if each of the sets of vectors will be a basis for $\mathbb{R}^3$.
(a) $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$. [Solution]
(b) $\mathbf{v}_1 = (1,0,0)$, $\mathbf{v}_2 = (0,1,0)$ and $\mathbf{v}_3 = (0,0,1)$. [Solution]
(c) $\mathbf{v}_1 = (1,1,0)$ and $\mathbf{v}_2 = (-1,0,0)$. [Solution]
(d) $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (-1,2,-2)$ and $\mathbf{v}_3 = (-1,4,-4)$. [Solution]
Solution

(a) $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$.

Now, let's see what we've got to do here to determine whether or not this set of vectors will be a basis for $\mathbb{R}^3$. First, we'll need to show that these vectors span $\mathbb{R}^3$ and from the section on Span we know that to do this we need to determine if we can find scalars $c_1$, $c_2$, and $c_3$ so that a general vector $\mathbf{u} = (u_1, u_2, u_3)$ from $\mathbb{R}^3$ can be expressed as a linear combination of these three vectors, or

$$c_1(1,-1,1) + c_2(0,1,2) + c_3(3,0,-1) = (u_1, u_2, u_3)$$

As we saw in the section on Span, all we need to do is convert this to a system of equations, in matrix form, and then determine if the coefficient matrix has a non-zero determinant or not. If the determinant of the coefficient matrix is non-zero then the set will span the given vector space and if the determinant of the coefficient matrix is zero then it will not span the given vector space. Recall as well that if the determinant of the coefficient matrix is non-zero then there will be exactly one solution to this system for each $\mathbf{u}$.

The matrix form of the system is,

$$\begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$$

Before we get the determinant of the coefficient matrix, let's also take a look at the other condition that must be met in order for this set to be a basis for $\mathbb{R}^3$. In order for these vectors to be a basis for $\mathbb{R}^3$ they must be linearly independent. From the section on Linear Independence we know that to determine this we need to solve the following equation,

$$c_1(1,-1,1) + c_2(0,1,2) + c_3(3,0,-1) = \mathbf{0} = (0,0,0)$$

If this system has only the trivial solution the vectors will be linearly independent and if it has solutions other than the trivial solution then the vectors will be linearly dependent. Note however, that this is really just a specific case of the system that we need to solve for the span question. Namely, here we need to solve,

$$\begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

Also, as noted above, if these vectors will span $\mathbb{R}^3$ then there will be exactly one solution to the system for each $\mathbf{u}$. In this case we know that the trivial solution will be a solution; our only question is whether or not it is the only solution.

So, all that we need to do here is compute the determinant of the coefficient matrix and if it is non-zero then the vectors will both span $\mathbb{R}^3$ and be linearly independent, and hence the vectors will be a basis for $\mathbb{R}^3$. On the other hand, if the determinant is zero then the vectors will not span $\mathbb{R}^3$ and will not be linearly independent and so they won't be a basis for $\mathbb{R}^3$.

So, here is the determinant of the coefficient matrix for this problem.

$$A = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix} \qquad\Rightarrow\qquad \det(A) = -10 \neq 0$$

So, these vectors will form a basis for $\mathbb{R}^3$.

[Return to Problems]

(b) $\mathbf{v}_1 = (1,0,0)$, $\mathbf{v}_2 = (0,1,0)$ and $\mathbf{v}_3 = (0,0,1)$.

Now, we could use a similar path for this one as we did earlier. However, in this case, we've done all the work for this one in previous sections. In Example 4(a) of the section on Span we determined that the standard basis vectors (interesting name isn't it? We'll come back to this in a bit) $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$ will span $\mathbb{R}^3$. Notice that while we've changed the notation a little just for this problem we are working with the standard basis vectors here and so we know that they will span $\mathbb{R}^3$.
Likewise, in Example 1(c) from the section on Linear Independence we saw that these vectors are linearly independent. Hence, based on all this previous work, we know that these three vectors will form a basis for $\mathbb{R}^3$.

[Return to Problems]

(c) $\mathbf{v}_1 = (1,1,0)$ and $\mathbf{v}_2 = (-1,0,0)$.

We can't use the method from part (a) here because the coefficient matrix wouldn't be square and so we can't take the determinant of it. So, let's just start this out by checking to see if these two vectors will span $\mathbb{R}^3$. If these two vectors will span $\mathbb{R}^3$ then for each $\mathbf{u} = (u_1, u_2, u_3)$ in $\mathbb{R}^3$ there must be scalars $c_1$ and $c_2$ so that,

$$c_1(1,1,0) + c_2(-1,0,0) = (u_1, u_2, u_3)$$

However, we can see right away that there will be problems here. The third component of each of these vectors is zero and hence the linear combination will never have any non-zero third component. Therefore, if we choose $\mathbf{u} = (u_1, u_2, u_3)$ to be any vector in $\mathbb{R}^3$ with $u_3 \neq 0$ we will not be able to find scalars $c_1$ and $c_2$ to satisfy the equation above.

Therefore, these two vectors do not span $\mathbb{R}^3$ and hence cannot be a basis for $\mathbb{R}^3$. Note however, that these two vectors are linearly independent (you should verify that). Despite this, the vectors are still not a basis for $\mathbb{R}^3$ since they do not span $\mathbb{R}^3$.

[Return to Problems]

(d) $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (-1,2,-2)$ and $\mathbf{v}_3 = (-1,4,-4)$.

In this case we've got three vectors with three components and so we can use the same method that we did in the first part. The general equation that needs to be solved here is,

$$c_1(1,-1,1) + c_2(-1,2,-2) + c_3(-1,4,-4) = (u_1, u_2, u_3)$$

and the matrix form of this is,

$$\begin{bmatrix} 1 & -1 & -1 \\ -1 & 2 & 4 \\ 1 & -2 & -4 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$$

We'll leave it to you to verify that $\det(A) = 0$ and so these three vectors do not span $\mathbb{R}^3$ and are not linearly independent. Either of which will mean that these three vectors are not a basis for $\mathbb{R}^3$.

[Return to Problems]

Before we move on let's go back and address something we pointed out in Example 1(b). As we pointed out at the time, the three vectors we were looking at were the standard basis vectors for $\mathbb{R}^3$. We should discuss the name a little more at this point and we'll do it a little more generally than in $\mathbb{R}^3$.

The vectors

$$\mathbf{e}_1 = (1,0,0,\ldots,0) \qquad \mathbf{e}_2 = (0,1,0,\ldots,0) \qquad \cdots \qquad \mathbf{e}_n = (0,0,0,\ldots,1)$$

will span $\mathbb{R}^n$ as we saw in the section on Span, and it is fairly simple to show that these vectors are linearly independent (you should verify this) and so they form a basis for $\mathbb{R}^n$. In some way this set of vectors is the simplest (we'll see this in a bit) and so we call them the standard basis vectors for $\mathbb{R}^n$.

We also have a set of standard basis vectors for a couple of the other vector spaces we've been looking at occasionally. Let's take a look at each of them.

Example 2 The set $\mathbf{p}_0 = 1$, $\mathbf{p}_1 = x$, $\mathbf{p}_2 = x^2$, ..., $\mathbf{p}_n = x^n$ is a basis for $P_n$ and is usually called the standard basis for $P_n$.

In Example 4(c) of the section on Span we showed that this set will span $P_n$. In Example 4(a) of the section on Linear Independence we showed that for $n = 2$ these form a linearly independent set in $P_2$. A similar argument can be used for the general case here and we'll leave it to you to go through that argument. So, this set of vectors is in fact a basis for $P_n$.
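Since the basis test for three vectors in $\mathbb{R}^3$ comes down to a single determinant, Example 1 is easy to check numerically. Here is a minimal sketch in Python with NumPy for parts (a) and (d); as before, this is just an illustration of the test.

```python
import numpy as np

# Columns are the vectors from Example 1(a) and 1(d) respectively.
A_a = np.array([[1.0, 0.0, 3.0],
                [-1.0, 1.0, 0.0],
                [1.0, 2.0, -1.0]])
A_d = np.array([[1.0, -1.0, -1.0],
                [-1.0, 2.0, 4.0],
                [1.0, -2.0, -4.0]])

print(round(np.linalg.det(A_a)))  # -10 -> basis for R^3
print(round(np.linalg.det(A_d)))  #   0 -> not a basis
```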
Example 3 The set $\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $\mathbf{v}_2 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$, $\mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, and $\mathbf{v}_4 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$ is a basis for $M_{22}$ and is usually called the standard basis for $M_{22}$.

In Example 4(b) of the section on Span we showed that this set will span $M_{22}$. We have yet to show that they are linearly independent however. So, following the procedure from the last section we know that we need to set up the following equation,

$$c_1\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + c_2\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + c_3\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + c_4\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \qquad\Rightarrow\qquad \begin{bmatrix} c_1 & c_3 \\ c_2 & c_4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

So, the only way the matrix on the left can be the zero matrix is for all the scalars to be zero. In other words, this equation has only the trivial solution and so the matrices are linearly independent. This, combined with the fact that they span $M_{22}$, shows that they are in fact a basis for $M_{22}$.

Note that we only looked at the standard basis vectors for $M_{22}$, but you should be able to modify this appropriately to arrive at a set of standard basis vectors for $M_{nm}$ in general.

Next let's take a look at the following theorem that gives us one of the reasons for being interested in a set of basis vectors.

Theorem 1 Suppose that the set $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for the vector space $V$. Then every vector $\mathbf{u}$ from $V$ can be expressed as a linear combination of the vectors from $S$ in exactly one way.

Proof : First, since we know that the vectors in $S$ are a basis for $V$, for any vector $\mathbf{u}$ in $V$ we can write it as a linear combination as follows,

$$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$

Now, let's suppose that it is also possible to write it as the following linear combination,

$$\mathbf{u} = k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n$$

If we take the difference of these two linear combinations we get,

$$\mathbf{0} = \mathbf{u} - \mathbf{u} = (c_1 - k_1)\mathbf{v}_1 + (c_2 - k_2)\mathbf{v}_2 + \cdots + (c_n - k_n)\mathbf{v}_n$$

However, because the vectors in $S$ are a basis they are linearly independent. That means that this equation can only have the trivial solution. Or, in other words, we must have,

$$c_1 - k_1 = 0, \quad c_2 - k_2 = 0, \quad \ldots, \quad c_n - k_n = 0$$

But this means that,

$$c_1 = k_1, \quad c_2 = k_2, \quad \ldots, \quad c_n = k_n$$

and so the two linear combinations were in fact the same linear combination.

We also have the following fact. It probably doesn't really rise to the level of a theorem, but we'll call it that anyway.

Theorem 2 Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a set of linearly independent vectors. Then $S$ is a basis for the vector space $V = \mathrm{span}(S)$.

The proof here is so simple that we're not really going to give it. By assumption the set is linearly independent and by definition $V$ is the span of $S$ and so the set must be a basis for $V$.

We now need to take a look at the following definition.

Definition 2 Suppose that $V$ is a non-zero vector space and that $S$ is a set of vectors from $V$ that forms a basis for $V$. If $S$ contains a finite number of vectors, say $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$, then we call $V$ a finite dimensional vector space and we say that the dimension of $V$, denoted by $\dim(V)$, is $n$ (i.e. the number of basis elements in $S$). If $V$ is not a finite dimensional vector space (so $S$ does not have a finite number of vectors) then we call it an infinite dimensional vector space.

By definition the dimension of the zero vector space (i.e. the vector space consisting solely of the zero vector) is zero.

Here are the dimensions of some of the vector spaces we've been dealing with to this point.
Example 4 Dimensions of some vector spaces.

(a) $\dim(\mathbb{R}^n) = n$ since the standard basis vectors for $\mathbb{R}^n$ are,

$$\mathbf{e}_1 = (1,0,0,\ldots,0) \qquad \mathbf{e}_2 = (0,1,0,\ldots,0) \qquad \cdots \qquad \mathbf{e}_n = (0,0,0,\ldots,1)$$

(b) $\dim(P_n) = n + 1$ since the standard basis vectors for $P_n$ are,

$$\mathbf{p}_0 = 1 \qquad \mathbf{p}_1 = x \qquad \mathbf{p}_2 = x^2 \qquad \cdots \qquad \mathbf{p}_n = x^n$$

(c) $\dim(M_{22}) = (2)(2) = 4$ since the standard basis vectors for $M_{22}$ are,

$$\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \qquad \mathbf{v}_2 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \qquad \mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \qquad \mathbf{v}_4 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

(d) $\dim(M_{nm}) = nm$. This follows from the natural extension of the previous part. The set of standard basis vectors will be a set of vectors that are zero in all entries except one entry which is a 1. There are $nm$ possible positions of the 1 and so there must be $nm$ basis vectors.

(e) The set of real valued functions on an interval, $F[a,b]$, and the set of continuous functions on an interval, $C[a,b]$, are infinite dimensional vector spaces. This is not easy to show at this point, but here is something to think about. If we take all the polynomials (of all degrees) then we can form a set (see part (b) above for elements of that set) that does not have a finite number of elements in it and yet is linearly independent. This set will be in either of the two vector spaces above, and from the following theorems we can see that there will be no finite basis set for these vector spaces.

We now need to take a look at several important theorems about vector spaces. The first couple of theorems will give us some nice ideas about linearly independent/dependent sets and spans. One of the more important uses of these two theorems is constructing a set of basis vectors, as we'll see eventually.

Theorem 3 Suppose that $V$ is a vector space and that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is any basis for $V$.
(a) If a set has more than $n$ vectors then it is linearly dependent.
(b) If a set has fewer than $n$ vectors then it does not span $V$.

Proof :
(a) Let $R = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_m\}$ and suppose that $m > n$. Since $S$ is a basis of $V$ every vector in $R$ can be written as a linear combination of vectors from $S$ as follows,

$$\mathbf{w}_1 = a_{11}\mathbf{v}_1 + a_{21}\mathbf{v}_2 + \cdots + a_{n1}\mathbf{v}_n$$
$$\mathbf{w}_2 = a_{12}\mathbf{v}_1 + a_{22}\mathbf{v}_2 + \cdots + a_{n2}\mathbf{v}_n$$
$$\vdots$$
$$\mathbf{w}_m = a_{1m}\mathbf{v}_1 + a_{2m}\mathbf{v}_2 + \cdots + a_{nm}\mathbf{v}_n$$

Now, we want to show that the vectors in $R$ are linearly dependent. So, we'll need to show that there are more solutions than just the trivial solution to the following equation.

$$k_1\mathbf{w}_1 + k_2\mathbf{w}_2 + \cdots + k_m\mathbf{w}_m = \mathbf{0}$$

If we plug in the set of linear combinations above for the $\mathbf{w}_i$'s in this equation and collect all the coefficients of the $\mathbf{v}_j$'s we arrive at,

$$(a_{11}k_1 + a_{12}k_2 + \cdots + a_{1m}k_m)\mathbf{v}_1 + (a_{21}k_1 + a_{22}k_2 + \cdots + a_{2m}k_m)\mathbf{v}_2 + \cdots + (a_{n1}k_1 + a_{n2}k_2 + \cdots + a_{nm}k_m)\mathbf{v}_n = \mathbf{0}$$

Now, the $\mathbf{v}_j$'s are linearly independent and so we know that the coefficients of each of the $\mathbf{v}_j$ in this equation must be zero. This gives the following system of equations.

$$a_{11}k_1 + a_{12}k_2 + \cdots + a_{1m}k_m = 0$$
$$a_{21}k_1 + a_{22}k_2 + \cdots + a_{2m}k_m = 0$$
$$\vdots$$
$$a_{n1}k_1 + a_{n2}k_2 + \cdots + a_{nm}k_m = 0$$

Now, in this system the $a_{ij}$'s are known scalars from the linear combinations above and the $k_i$'s are unknowns. So we can see that there are $n$ equations and $m$ unknowns. However, because $m > n$ there are more unknowns than equations and so by Theorem 2 in the solving systems of equations section we know that if there are more unknowns than equations in a homogeneous system, as we have here, there will be infinitely many solutions.
Therefore the equation,

$$k_1\mathbf{w}_1 + k_2\mathbf{w}_2 + \cdots + k_m\mathbf{w}_m = \mathbf{0}$$

will have more solutions than the trivial solution and so the vectors in $R$ must be linearly dependent.

(b) The proof of this part is very similar to the previous part. Let's start with the set $R = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_m\}$ and this time we're going to assume that $m < n$. It's not so easy to show directly that $R$ will not span $V$, but if we assume for a second that $R$ does span $V$ we'll see that we'll run into some problems with our basis set $S$. This is called a proof by contradiction. We'll assume the opposite of what we want to prove and show that this will lead to a contradiction of something that we know is true (in this case that $S$ is a basis for $V$).

So, we'll assume that $R$ will span $V$. This means that all the vectors in $S$ can be written as a linear combination of the vectors in $R$ or,

$$\mathbf{v}_1 = a_{11}\mathbf{w}_1 + a_{21}\mathbf{w}_2 + \cdots + a_{m1}\mathbf{w}_m$$
$$\mathbf{v}_2 = a_{12}\mathbf{w}_1 + a_{22}\mathbf{w}_2 + \cdots + a_{m2}\mathbf{w}_m$$
$$\vdots$$
$$\mathbf{v}_n = a_{1n}\mathbf{w}_1 + a_{2n}\mathbf{w}_2 + \cdots + a_{mn}\mathbf{w}_m$$

Let's now look at the equation,

$$k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n = \mathbf{0}$$

Now, because $S$ is a basis we know that the $\mathbf{v}_i$'s must be linearly independent and so the only solution to this must be the trivial solution. However, if we substitute the linear combinations of the $\mathbf{v}_i$'s into this, rearrange as we did in part (a) and then set all the coefficients equal to zero, we get the following system of equations.

$$a_{11}k_1 + a_{12}k_2 + \cdots + a_{1n}k_n = 0$$
$$a_{21}k_1 + a_{22}k_2 + \cdots + a_{2n}k_n = 0$$
$$\vdots$$
$$a_{m1}k_1 + a_{m2}k_2 + \cdots + a_{mn}k_n = 0$$

Again, there are more unknowns than equations here and so there are infinitely many solutions. This contradicts the fact that we know the only solution to the equation $k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n = \mathbf{0}$ is the trivial solution. So, our original assumption that $R$ spans $V$ must be wrong. Therefore $R$ will not span $V$.

Theorem 4 Suppose $S$ is a non-empty set of vectors in a vector space $V$.
(a) If $S$ is linearly independent and $\mathbf{u}$ is any vector in $V$ that is not in $\mathrm{span}(S)$ then the set $R = S \cup \{\mathbf{u}\}$ (i.e. the set of $S$ and $\mathbf{u}$) is also a linearly independent set.
(b) If $\mathbf{u}$ is any vector in $S$ that can be written as a linear combination of the other vectors in $S$, let $R = S - \{\mathbf{u}\}$ be the set we get by removing $\mathbf{u}$ from $S$. Then,

$$\mathrm{span}(S) = \mathrm{span}(R)$$

In other words, $S$ and $S - \{\mathbf{u}\}$ will span the same space.

Proof :
(a) If $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ we need to show that the set $R = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n, \mathbf{u}\}$ is linearly independent. So, let's form the equation,

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n + c_{n+1}\mathbf{u} = \mathbf{0}$$

Now, if $c_{n+1}$ is not zero we will be able to write $\mathbf{u}$ as a linear combination of the $\mathbf{v}_i$'s (divide through by $c_{n+1}$ and move everything but $\mathbf{u}$ to the right side), but this contradicts the fact that $\mathbf{u}$ is not in $\mathrm{span}(S)$. Therefore we must have $c_{n+1} = 0$ and our equation is now,

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$

But the vectors in $S$ are linearly independent and so the only solution to this is the trivial solution,

$$c_1 = 0 \qquad c_2 = 0 \qquad \cdots \qquad c_n = 0$$

So, we've shown that the only solution to

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n + c_{n+1}\mathbf{u} = \mathbf{0}$$

is

$$c_1 = 0 \qquad c_2 = 0 \qquad \cdots \qquad c_n = 0 \qquad c_{n+1} = 0$$

Therefore, the vectors in $R$ are linearly independent.

(b) Let's suppose that our set is $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n, \mathbf{u}\}$ and so we have $R = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$. First, by assumption $\mathbf{u}$ is a linear combination of the remaining vectors in $S$ or,

$$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$

Next let $\mathbf{w}$ be any vector in $\mathrm{span}(S)$.
So, $\mathbf{w}$ can be written as a linear combination of all the vectors in $S$ or,

$$\mathbf{w} = k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n + k_{n+1}\mathbf{u}$$

Now plug in the expression for $\mathbf{u}$ above to get,

$$\mathbf{w} = k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n + k_{n+1}(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n)$$
$$= (k_1 + k_{n+1}c_1)\mathbf{v}_1 + (k_2 + k_{n+1}c_2)\mathbf{v}_2 + \cdots + (k_n + k_{n+1}c_n)\mathbf{v}_n$$

So, $\mathbf{w}$ is a linear combination of vectors only in $R$ and so, at the least, every vector that is in $\mathrm{span}(S)$ must also be in $\mathrm{span}(R)$.

Finally, if $\mathbf{w}$ is any vector in $\mathrm{span}(R)$ then it can be written as a linear combination of vectors from $R$, but since these are also vectors in $S$ we see that $\mathbf{w}$ can also, by default, be written as a linear combination of vectors from $S$ and so is also in $\mathrm{span}(S)$. We've just shown that every vector in $\mathrm{span}(R)$ must also be in $\mathrm{span}(S)$.

Since we've shown that $\mathrm{span}(S)$ must be contained in $\mathrm{span}(R)$ and that every vector in $\mathrm{span}(R)$ must also be contained in $\mathrm{span}(S)$, this can only be true if $\mathrm{span}(S) = \mathrm{span}(R)$.

We can use the previous two theorems to get some nice ideas about the basis of a vector space.

Theorem 5 Suppose that $V$ is a vector space. Then all the bases for $V$ contain the same number of vectors.

Proof : Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for $V$. Now, let $R$ be any other basis for $V$. Then by Theorem 3 above, if $R$ contains more than $n$ elements it can't be a linearly independent set and so can't be a basis. So, we know that, at the least, $R$ can't contain more than $n$ elements. However, Theorem 3 also tells us that if $R$ contains fewer than $n$ elements then it won't span $V$ and hence can't be a basis for $V$. Therefore the only possibility is that $R$ must contain exactly $n$ elements.

Theorem 6 Suppose that $V$ is a vector space and that $\dim(V) = n$. Also suppose that $S$ is a set that contains exactly $n$ vectors. $S$ will be a basis for $V$ if either $V = \mathrm{span}(S)$ or $S$ is linearly independent.

Proof : First suppose that $V = \mathrm{span}(S)$. If $S$ is linearly dependent then there must be some vector $\mathbf{u}$ in $S$ that can be written as a linear combination of other vectors in $S$ and so by Theorem 4(b) we can remove $\mathbf{u}$ from $S$ and our new set of $n - 1$ vectors will still span $V$. However, Theorem 3(b) tells us that any set with fewer vectors than a basis (i.e. fewer than $n$ in this case) can't span $V$. Therefore, $S$ must be linearly independent and hence $S$ is a basis for $V$.

Now, let's suppose that $S$ is linearly independent. If $S$ does not span $V$ then there must be a vector $\mathbf{u}$ that is not in $\mathrm{span}(S)$. If we add $\mathbf{u}$ to $S$ the resulting set with $n + 1$ vectors must be linearly independent by Theorem 4(a). On the other hand, Theorem 3(a) tells us that any set with more vectors than the basis (i.e. greater than $n$) can't be linearly independent. Therefore, $S$ must span $V$ and hence $S$ is a basis for $V$.

Theorem 7 Suppose that $V$ is a finite dimensional vector space with $\dim(V) = n$ and that $S$ is any finite set of vectors from $V$.
(a) If $S$ spans $V$ but is not a basis for $V$ then it can be reduced to a basis for $V$ by removing certain vectors from $S$.
(b) If $S$ is linearly independent but is not a basis for $V$ then it can be enlarged to a basis for $V$ by adding in certain vectors from $V$.

Proof :
(a) If $S$ spans $V$ but is not a basis for $V$ then it must be a linearly dependent set. So, there is some vector $\mathbf{u}$ in $S$ that can be written as a linear combination of the other vectors in $S$. Let $R$ be the set that results from removing $\mathbf{u}$ from $S$. Then by Theorem 4(b) $R$ will still span $V$.
If $R$ is linearly independent then we have a basis for $V$, and if it is still linearly dependent we can remove another element to form a new set $R'$ that will still span $V$. We continue in this way until we've reduced $S$ down to a set of linearly independent vectors and at that point we will have a basis of $V$.

(b) If $S$ is linearly independent but not a basis then it must not span $V$. Therefore, there is a vector $\mathbf{u}$ that is not in $\mathrm{span}(S)$. So, add $\mathbf{u}$ to $S$ to form the new set $R$. Then by Theorem 4(a) the set $R$ is still linearly independent. If $R$ now spans $V$ we've got a basis for $V$ and if not, add another element to form the new linearly independent set $R'$. Continue in this fashion until we reach a set with $n$ vectors and then by Theorem 6 this set must be a basis for $V$.

Okay, we should probably see some examples of some of these theorems in action.

Example 5 Reduce each of the following sets of vectors to obtain a basis for the given vector space.
(a) $\mathbf{v}_1 = (1,0,0)$, $\mathbf{v}_2 = (0,1,-1)$, $\mathbf{v}_3 = (0,4,-3)$ and $\mathbf{v}_4 = (0,2,0)$ for $\mathbb{R}^3$. [Solution]
(b) $\mathbf{p}_0 = 2$, $\mathbf{p}_1 = -4x$, $\mathbf{p}_2 = x^2 + x + 1$, $\mathbf{p}_3 = 2x + 7$ and $\mathbf{p}_4 = 5x^2 - 1$ for $P_2$. [Solution]

Solution
First, notice that provided each of these sets of vectors spans the given vector space, Theorem 7(a) tells us that this can in fact be done.

(a) $\mathbf{v}_1 = (1,0,0)$, $\mathbf{v}_2 = (0,1,-1)$, $\mathbf{v}_3 = (0,4,-3)$ and $\mathbf{v}_4 = (0,2,0)$ for $\mathbb{R}^3$.

We will leave it to you to verify that this set of vectors does indeed span $\mathbb{R}^3$, and since we know that $\dim(\mathbb{R}^3) = 3$ we can see that we'll need to remove one vector from the list in order to get down to a basis. However, we can't just remove any of the vectors. For instance, if we removed $\mathbf{v}_1$ the set would no longer span $\mathbb{R}^3$. You should verify this, but you can also quickly see that only $\mathbf{v}_1$ has a non-zero first component and so will be required for the vectors to span $\mathbb{R}^3$.

Theorem 4(b) tells us that if we remove a vector that is a linear combination of some of the other vectors we won't change the span of the set. So, that is what we need to look for. Now, it looks like the last three vectors are probably linearly dependent, so if we set up the following equation

$$c_1\mathbf{v}_2 + c_2\mathbf{v}_3 + c_3\mathbf{v}_4 = \mathbf{0}$$

and solve it,

$$c_1 = 6t \qquad c_2 = -2t \qquad c_3 = t \qquad t \text{ is any real number}$$

we can see that these in fact are linearly dependent vectors. This means that we can remove any of these, since we could write any one of them as a linear combination of the other two. So, let's remove $\mathbf{v}_3$ for no other reason than that the entries in this vector are larger than the others. The following set still spans $\mathbb{R}^3$ and has exactly 3 vectors and so by Theorem 6 it must be a basis for $\mathbb{R}^3$.

$$\mathbf{v}_1 = (1,0,0) \qquad \mathbf{v}_2 = (0,1,-1) \qquad \mathbf{v}_4 = (0,2,0)$$

For the practice you should verify that this set does span $\mathbb{R}^3$ and is linearly independent.

[Return to Problems]
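One systematic way to carry out the reduction in part (a) is to row reduce the matrix whose columns are the given vectors: the pivot columns give one valid choice of basis vectors. Here is a minimal sketch with SymPy; note that it keeps $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$, a different but equally valid basis than the $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_4\}$ chosen above.

```python
import sympy as sp

# Example 5(a): columns are v1, v2, v3, v4.
A = sp.Matrix([[1, 0, 0, 0],
               [0, 1, 4, 2],
               [0, -1, -3, 0]])

rref, pivots = A.rref()
print(pivots)  # (0, 1, 2) -> v1, v2 and v3 form a basis for R^3
```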
(b) $\mathbf{p}_0 = 2$, $\mathbf{p}_1 = -4x$, $\mathbf{p}_2 = x^2 + x + 1$, $\mathbf{p}_3 = 2x + 7$ and $\mathbf{p}_4 = 5x^2 - 1$ for $P_2$.

We'll go through this one a little faster. First, you should verify that the set of vectors does indeed span $P_2$. Also, because $\dim(P_2) = 3$ we know that we'll need to remove two of the vectors. Again, remember that each vector we remove must be a linear combination of some of the other vectors.

First, it looks like $\mathbf{p}_3$ is a linear combination of $\mathbf{p}_0$ and $\mathbf{p}_1$ (you should verify this) and so we can remove $\mathbf{p}_3$ and the set will still span $P_2$. This leaves us with the following set of vectors.

$$\mathbf{p}_0 = 2 \qquad \mathbf{p}_1 = -4x \qquad \mathbf{p}_2 = x^2 + x + 1 \qquad \mathbf{p}_4 = 5x^2 - 1$$

Now, it looks like $\mathbf{p}_2$ can easily be written as a linear combination of the remaining vectors (again, please verify this) and so we can remove that one as well. We now have the following set,

$$\mathbf{p}_0 = 2 \qquad \mathbf{p}_1 = -4x \qquad \mathbf{p}_4 = 5x^2 - 1$$

which has 3 vectors and will span $P_2$ and so it must be a basis for $P_2$ by Theorem 6.

[Return to Problems]

Example 6 Expand each of the following sets of vectors into a basis for the given vector space.
(a) $\mathbf{v}_1 = (1,0,0,0)$, $\mathbf{v}_2 = (1,1,0,0)$, $\mathbf{v}_3 = (1,1,1,0)$ in $\mathbb{R}^4$. [Solution]
(b) $\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} 2 & 0 \\ -1 & 0 \end{bmatrix}$ in $M_{22}$. [Solution]

Solution
Theorem 7(b) tells us that this is possible to do provided the sets are linearly independent.

(a) $\mathbf{v}_1 = (1,0,0,0)$, $\mathbf{v}_2 = (1,1,0,0)$, $\mathbf{v}_3 = (1,1,1,0)$ in $\mathbb{R}^4$.

We'll leave it to you to verify that these vectors are linearly independent. Also, $\dim(\mathbb{R}^4) = 4$ and so it looks like we'll just need to add in a single vector to get a basis. Theorem 4(a) tells us that provided the vector we add in is not in the span of the original vectors we can retain the linear independence of the vectors. This will in turn give us a set of 4 linearly independent vectors and so by Theorem 6 it will have to be a basis for $\mathbb{R}^4$.

Now, we need to find a vector that is not in the span of the given vectors. This is easy to do provided you notice that all of the vectors have a zero in the fourth component. This means that all the vectors that are in $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ will have a zero in the fourth component. Therefore, all that we need to do is take any vector that has a non-zero fourth component and we'll have a vector that is outside $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. Here are some possible vectors we could use,

$$(0,0,0,1) \qquad (0,-4,0,2) \qquad (6,-3,2,-1) \qquad (1,1,1,1)$$

The last one seems to be in keeping with the pattern of the original three vectors so we'll use that one to get the following set of four vectors.

$$\mathbf{v}_1 = (1,0,0,0) \qquad \mathbf{v}_2 = (1,1,0,0) \qquad \mathbf{v}_3 = (1,1,1,0) \qquad \mathbf{v}_4 = (1,1,1,1)$$

Since this set is still linearly independent and now has 4 vectors, by Theorem 6 this set is a basis for $\mathbb{R}^4$ (you should verify this).

[Return to Problems]

(b) $\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} 2 & 0 \\ -1 & 0 \end{bmatrix}$ in $M_{22}$.

The two vectors here are linearly independent (verify this) and $\dim(M_{22}) = 4$, so we'll need to add in two vectors to get a basis. We will have to do this in two steps however. The first vector we add cannot be in $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2\}$ and the second vector we add cannot be in $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ where $\mathbf{v}_3$ is the new vector we added in the first step.

So, first notice that all the vectors in $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2\}$ will have zeroes in the second column, so anything that doesn't have a zero in at least one entry in the second column will work for $\mathbf{v}_3$. We'll choose the following for $\mathbf{v}_3$.

$$\mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}$$

Note that this is probably not the best choice since it's got non-zero entries in both entries of the second column. It would have been easier to choose something that had a zero in one of the entries of the second column. However, doing it this way will allow us to make a point about choosing the second vector. Here is the list of vectors that we've got to this point.

$$\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \qquad \mathbf{v}_2 = \begin{bmatrix} 2 & 0 \\ -1 & 0 \end{bmatrix} \qquad \mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}$$

Now, we need to find a fourth vector and it needs to be outside of $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$.
Now, let's again note that because of our choice of $\mathbf{v}_3$ all the vectors in $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ will have identical numbers in both entries of the second column, and so we can choose any new vector that does not have identical entries in the second column and we'll have something that is outside of $\mathrm{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. Again, we'll go with something that is probably not the best choice if we had to work with this basis, but let's not get too locked into always taking the easy choice. There are, on occasion, reasons to choose vectors other than the "obvious" and easy choices. In this case we'll use,

$$\mathbf{v}_4 = \begin{bmatrix} -3 & 0 \\ 0 & 2 \end{bmatrix}$$

This gives us the following set of vectors,

$$\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \qquad \mathbf{v}_2 = \begin{bmatrix} 2 & 0 \\ -1 & 0 \end{bmatrix} \qquad \mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \qquad \mathbf{v}_4 = \begin{bmatrix} -3 & 0 \\ 0 & 2 \end{bmatrix}$$

and they will be a basis for $M_{22}$ since these are four linearly independent vectors in a vector space with dimension of 4.

[Return to Problems]

We'll close out this section with a couple of theorems and an example that will relate the dimensions of subspaces of a vector space to the dimension of the vector space itself.

Theorem 8 Suppose that $W$ is a subspace of a finite dimensional vector space $V$. Then $W$ is also finite dimensional.

Proof : Suppose that $\dim(V) = n$. Let's also suppose that $W$ is not finite dimensional and suppose that $S$ is a basis for $W$. Since we've assumed that $W$ is not finite dimensional we know that $S$ will not have a finite number of vectors in it. However, since $S$ is a basis for $W$ we know that its vectors must be linearly independent and we also know that they must be vectors in $V$. This, however, means that we've got a set of more than $n$ vectors that is linearly independent and this contradicts the results of Theorem 3(a). Therefore $W$ must be finite dimensional as well.

We can actually go a step further here than this theorem.

Theorem 9 Suppose that $W$ is a subspace of a finite dimensional vector space $V$. Then $\dim(W) \leq \dim(V)$, and if $\dim(W) = \dim(V)$ then in fact we have $W = V$.

Proof : By Theorem 8 we know that $W$ must be a finite dimensional vector space and so let's suppose that $S = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_n\}$ is a basis for $W$. Now, $S$ is either a basis for $V$ or it isn't a basis for $V$.

If $S$ is a basis for $V$ then by Theorem 5 we have that $\dim(V) = \dim(W) = n$.

On the other hand, if $S$ is not a basis for $V$, then by Theorem 7(b) (the vectors of $S$ must be linearly independent since they form a basis for $W$) it can be expanded into a basis for $V$ and so we then know that $\dim(W) < \dim(V)$.

So, we've shown that in every case we must have $\dim(W) \leq \dim(V)$.

Now, let's just assume that all we know is that $\dim(W) = \dim(V)$. In this case $S$ will be a set of $n$ linearly independent vectors in a vector space of dimension $n$ (since $\dim(W) = \dim(V)$) and so by Theorem 6, $S$ must be a basis for $V$ as well. This means that any vector $\mathbf{u}$ from $V$ can be written as a linear combination of vectors from $S$. However, since $S$ is also a basis for $W$ this means that $\mathbf{u}$ must also be in $W$.

So, we've just shown that every vector in $V$ must also be in $W$, and because $W$ is a subspace of $V$ we know that every vector in $W$ is also in $V$. The only way for this to be true is if we have $W = V$.

We should probably work one quick example illustrating this theorem.
Example 7 Determine a basis and dimension for the null space of

$$A = \begin{bmatrix} 7 & 2 & -2 & -4 & 3 \\ -3 & 3 & 0 & 2 & 1 \\ -4 & 1 & 8 & 0 & -20 \end{bmatrix}$$

Solution
First recall that to find the null space of a matrix we need to solve the following system of equations,

$$\begin{bmatrix} 7 & 2 & -2 & -4 & 3 \\ -3 & 3 & 0 & 2 & 1 \\ -4 & 1 & 8 & 0 & -20 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

We solved a similar system back in Example 7 of the Solving Systems of Equations section, so we'll leave it to you to verify that the solution is,

$$x_1 = \tfrac{2}{3}t + \tfrac{1}{3}s \qquad x_2 = 0 \qquad x_3 = \tfrac{1}{3}t + \tfrac{8}{3}s \qquad x_4 = t \qquad x_5 = s \qquad s \text{ and } t \text{ are any numbers}$$

Now, recall that the null space of an $n \times m$ matrix will be a subspace of $\mathbb{R}^m$, so the null space of this matrix must be a subspace of $\mathbb{R}^5$ and so its dimension should be 5 or less.

To verify this we'll need the basis for the null space. This is actually easier to find than you might think. The null space will consist of all vectors in $\mathbb{R}^5$ that have the form,

$$\mathbf{x} = (x_1, x_2, x_3, x_4, x_5) = \left(\tfrac{2}{3}t + \tfrac{1}{3}s,\; 0,\; \tfrac{1}{3}t + \tfrac{8}{3}s,\; t,\; s\right)$$

Now, split this up into two vectors: one that contains only the terms with a $t$ in them and one that contains only the terms with an $s$ in them. Then factor the $t$ and $s$ out of the vectors.

$$\mathbf{x} = \left(\tfrac{2}{3}t,\; 0,\; \tfrac{1}{3}t,\; t,\; 0\right) + \left(\tfrac{1}{3}s,\; 0,\; \tfrac{8}{3}s,\; 0,\; s\right) = t\left(\tfrac{2}{3},\; 0,\; \tfrac{1}{3},\; 1,\; 0\right) + s\left(\tfrac{1}{3},\; 0,\; \tfrac{8}{3},\; 0,\; 1\right)$$

So, we can see that the null space is the set of all vectors that are a linear combination of

$$\mathbf{v}_1 = \left(\tfrac{2}{3},\; 0,\; \tfrac{1}{3},\; 1,\; 0\right) \qquad \mathbf{v}_2 = \left(\tfrac{1}{3},\; 0,\; \tfrac{8}{3},\; 0,\; 1\right)$$

and so the null space of $A$ is spanned by these two vectors. You should also verify that these two vectors are linearly independent and so they in fact form a basis for the null space of $A$. This also means that the null space of $A$ has a dimension of 2, which is less than 5 as Theorem 9 suggests it should be.
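The null space basis in Example 7 can also be produced directly by a computer algebra system. Here is a minimal sketch with SymPy, using the matrix $A$ as written above; the two basis vectors it returns match $\mathbf{v}_1$ and $\mathbf{v}_2$, and their count gives the dimension.

```python
import sympy as sp

A = sp.Matrix([[7, 2, -2, -4, 3],
               [-3, 3, 0, 2, 1],
               [-4, 1, 8, 0, -20]])

basis = A.nullspace()
for v in basis:
    print(v.T)
# Matrix([[2/3, 0, 1/3, 1, 0]])
# Matrix([[1/3, 0, 8/3, 0, 1]])
print(len(basis))  # 2 = dimension of the null space
```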
Change of Basis

In Example 1 of the previous section we saw that the vectors $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$ formed a basis for $\mathbb{R}^3$. This means that every vector in $\mathbb{R}^3$, for example the vector $\mathbf{x} = (10,5,0)$, can be written as a linear combination of these three vectors. Of course this is not the only basis for $\mathbb{R}^3$. There are many other bases for $\mathbb{R}^3$ out there in the world, not the least of which is the standard basis for $\mathbb{R}^3$,

$$\mathbf{e}_1 = (1,0,0) \qquad \mathbf{e}_2 = (0,1,0) \qquad \mathbf{e}_3 = (0,0,1)$$

The standard basis for any vector space is generally the easiest to work with, but unfortunately there are times when we need to work with other bases. In this section we're going to take a look at a way to move between two different bases for a vector space and see how to write a general vector as a linear combination of the vectors from each basis.

To start this section off we're going to first need a way to quickly distinguish between the various linear combinations we get from each basis. The following definition will help with this.

Definition 1 Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for a vector space $V$ and that $\mathbf{u}$ is any vector from $V$. Since $\mathbf{u}$ is a vector in $V$ it can be expressed as a linear combination of the vectors from $S$ as follows,

$$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$

The scalars $c_1, c_2, \ldots, c_n$ are called the coordinates of $\mathbf{u}$ relative to the basis $S$. The coordinate vector of $\mathbf{u}$ relative to $S$ is denoted by $(\mathbf{u})_S$ and defined to be the following vector in $\mathbb{R}^n$,

$$(\mathbf{u})_S = (c_1, c_2, \ldots, c_n)$$

Note that by Theorem 1 of the previous section we know that the linear combination of vectors from the basis will be unique for $\mathbf{u}$ and so the coordinate vector $(\mathbf{u})_S$ will also be unique.

Also, on occasion it will be convenient to think of the coordinate vector as a matrix. In these cases we will call it the coordinate matrix of $\mathbf{u}$ relative to $S$. The coordinate matrix will be denoted and defined as follows,

$$[\mathbf{u}]_S = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}$$

At this point we should probably also give a quick warning about the coordinate vectors. In most cases, although not all as we'll see shortly, the coordinate vector/matrix is NOT the vector itself that we're after. It is nothing more than the coefficients of the basis vectors that we need in order to write the given vector as a linear combination of the basis vectors. It is very easy to confuse the coordinate vector/matrix with the vector itself if we aren't paying attention, so be careful.

Let's see some examples of coordinate vectors.

Example 1 Determine the coordinate vector of $\mathbf{x} = (10,5,0)$ relative to the following bases.
(a) The standard basis vectors for $\mathbb{R}^3$, $S = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$. [Solution]
(b) The basis $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ where $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$. [Solution]

Solution
In each case we'll need to determine how to write $\mathbf{x} = (10,5,0)$ as a linear combination of the given basis vectors.

(a) The standard basis vectors for $\mathbb{R}^3$, $S = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$.

In this case the linear combination is simple to write down.

$$\mathbf{x} = (10,5,0) = 10\mathbf{e}_1 + 5\mathbf{e}_2 + 0\mathbf{e}_3$$

and so the coordinate vector for $\mathbf{x}$ relative to the standard basis vectors for $\mathbb{R}^3$ is,

$$(\mathbf{x})_S = (10,5,0)$$

So, in the case of the standard basis vectors we've got that,

$$(\mathbf{x})_S = (10,5,0) = \mathbf{x}$$

This is, of course, what makes the standard basis vectors so nice to work with. The coordinate vector relative to the standard basis vectors is just the vector itself.

[Return to Problems]

(b) The basis $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ where $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$.

Now, in this case we'll have a little work to do. We'll first need to set up the following vector equation,

$$(10,5,0) = c_1(1,-1,1) + c_2(0,1,2) + c_3(3,0,-1)$$

and we'll need to determine the scalars $c_1$, $c_2$ and $c_3$. We saw how to solve this kind of vector equation in both the section on Span and the section on Linear Independence. We need to set up the following system of equations,

$$c_1 + 3c_3 = 10 \qquad -c_1 + c_2 = 5 \qquad c_1 + 2c_2 - c_3 = 0$$

We'll leave it to you to verify that the solution to this system is,

$$c_1 = -2 \qquad c_2 = 3 \qquad c_3 = 4$$

The coordinate vector for $\mathbf{x}$ relative to $A$ is then,

$$(\mathbf{x})_A = (-2, 3, 4)$$

[Return to Problems]
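Finding coordinates relative to a basis of $\mathbb{R}^n$ is just solving one linear system, so numerically it is a one-liner. Here is a minimal sketch for Example 1(b) in Python with NumPy; the columns of the matrix are the basis vectors of $A$.

```python
import numpy as np

# Example 1(b): columns are the basis vectors v1, v2, v3.
V = np.array([[1.0, 0.0, 3.0],
              [-1.0, 1.0, 0.0],
              [1.0, 2.0, -1.0]])
x = np.array([10.0, 5.0, 0.0])

# The coordinate vector (x)_A solves V c = x.
print(np.linalg.solve(V, x))  # [-2.  3.  4.]
```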
As always, we should do an example or two in a vector space other than $\mathbb{R}^n$.

Example 2 Determine the coordinate vector of $\mathbf{p} = 4 - 2x + 3x^2$ relative to the following bases.
(a) The standard basis for $P_2$, $S = \{1, x, x^2\}$. [Solution]
(b) The basis for $P_2$, $A = \{\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3\}$, where $\mathbf{p}_1 = 2$, $\mathbf{p}_2 = -4x$, and $\mathbf{p}_3 = 5x^2 - 1$. [Solution]

Solution
(a) The standard basis for $P_2$, $S = \{1, x, x^2\}$.

So, we need to write $\mathbf{p}$ as a linear combination of the standard basis vectors in this case. However, it's already written in that way. So, the coordinate vector for $\mathbf{p}$ relative to the standard basis vectors is,

$$(\mathbf{p})_S = (4, -2, 3)$$

The ease with which we can write down this vector is why this set of vectors is the standard basis for $P_2$.

[Return to Problems]

(b) The basis for $P_2$, $A = \{\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3\}$, where $\mathbf{p}_1 = 2$, $\mathbf{p}_2 = -4x$, and $\mathbf{p}_3 = 5x^2 - 1$.

Okay, this set is similar to the standard basis vectors, but they are a little different so we can expect the coordinate vector to change. Note as well that we proved in Example 5(b) of the previous section that this set is a basis.

We'll need to find scalars $c_1$, $c_2$ and $c_3$ for the following linear combination.

$$4 - 2x + 3x^2 = c_1\mathbf{p}_1 + c_2\mathbf{p}_2 + c_3\mathbf{p}_3 = c_1(2) + c_2(-4x) + c_3(5x^2 - 1)$$

This will mean solving the following system of equations.

$$2c_1 - c_3 = 4 \qquad -4c_2 = -2 \qquad 5c_3 = 3$$

This is not a terribly difficult system to solve. Here is the solution,

$$c_1 = \tfrac{23}{10} \qquad c_2 = \tfrac{1}{2} \qquad c_3 = \tfrac{3}{5}$$

The coordinate vector for $\mathbf{p}$ relative to this basis is then,

$$(\mathbf{p})_A = \left(\tfrac{23}{10},\; \tfrac{1}{2},\; \tfrac{3}{5}\right)$$

[Return to Problems]

Example 3 Determine the coordinate vector of $\mathbf{v} = \begin{bmatrix} -1 & 0 \\ 1 & -4 \end{bmatrix}$ relative to the following bases.
(a) The standard basis of $M_{22}$, $S = \left\{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\right\}$. [Solution]
(b) The basis for $M_{22}$, $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ where $\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $\mathbf{v}_2 = \begin{bmatrix} 2 & 0 \\ -1 & 0 \end{bmatrix}$, $\mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}$, and $\mathbf{v}_4 = \begin{bmatrix} -3 & 0 \\ 0 & 2 \end{bmatrix}$. [Solution]

Solution
(a) The standard basis of $M_{22}$.

As with the previous two examples, the standard basis is called that for a reason. It is very easy to write any $2 \times 2$ matrix as a linear combination of these vectors. Here it is for this case.

$$\begin{bmatrix} -1 & 0 \\ 1 & -4 \end{bmatrix} = (-1)\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + (1)\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + (0)\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + (-4)\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

The coordinate vector for $\mathbf{v}$ relative to the standard basis is then,

$$(\mathbf{v})_S = (-1, 1, 0, -4)$$

[Return to Problems]

(b) The basis for $M_{22}$, $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$.

This one will be a little work, as usual, but won't be too bad. We'll need to find scalars $c_1$, $c_2$, $c_3$ and $c_4$ for the following linear combination.

$$\begin{bmatrix} -1 & 0 \\ 1 & -4 \end{bmatrix} = c_1\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + c_2\begin{bmatrix} 2 & 0 \\ -1 & 0 \end{bmatrix} + c_3\begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} + c_4\begin{bmatrix} -3 & 0 \\ 0 & 2 \end{bmatrix}$$

Adding the matrices on the right into a single matrix and setting components equal gives the following system of equations that will need to be solved.

$$c_1 + 2c_2 - 3c_4 = -1 \qquad -c_2 = 1 \qquad c_3 = 0 \qquad c_3 + 2c_4 = -4$$

Not a bad system to solve. Here is the solution.

$$c_1 = -5 \qquad c_2 = -1 \qquad c_3 = 0 \qquad c_4 = -2$$

The coordinate vector for $\mathbf{v}$ relative to this basis is then,

$$(\mathbf{v})_A = (-5, -1, 0, -2)$$

[Return to Problems]

Before we move on we should point out that the order in which we list our basis elements is important; to see this let's take a look at the following example.

Example 4 The vectors $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$ form a basis for $\mathbb{R}^3$. Let $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ and $B = \{\mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_1\}$ be different orderings of these vectors and determine the vector in $\mathbb{R}^3$ that has the following coordinate vectors.
(a) $(\mathbf{x})_A = (3, -1, 8)$
(b) $(\mathbf{x})_B = (3, -1, 8)$

Solution
So, these are both the same coordinate vector, but they are relative to different orderings of the basis vectors. Determining the vector in $\mathbb{R}^3$ for each is a simple thing to do. Recall that the coordinate vector is nothing more than the scalars in the linear combination, so all we need to do is reform the linear combination and then multiply and add everything out to determine the vector. The one thing that we need to be careful of, however, is the order. The first scalar is the coefficient of the first vector listed in the set, the second scalar in the coordinate vector is the coefficient for the second vector listed, etc.

(a) Here is the work for this part.

$$\mathbf{x} = 3(1,-1,1) + (-1)(0,1,2) + 8(3,0,-1) = (27, -4, -7)$$

(b) And here is the work for this part.

$$\mathbf{x} = 3(0,1,2) + (-1)(3,0,-1) + 8(1,-1,1) = (5, -5, 15)$$

So, we clearly get different vectors simply by rearranging the order of the vectors in our basis.

Now that we've got the coordinate vectors out of the way we want to find a quick and easy way to convert between the coordinate vectors from one basis to a different basis. This is called a change of basis. Actually, it will be easier to convert the coordinate matrix for a vector, but these are essentially the same thing as the coordinate vectors, so if we can convert one we can convert the other.

We will develop the method for vectors in a 2-dimensional space (not necessarily $\mathbb{R}^2$) and in the process we will see how to do this for any vector space. So let's start off and assume that $V$ is a vector space and that $\dim(V) = 2$. Let's also suppose that we have two bases for $V$. The "old" basis,

$$B = \{\mathbf{v}_1, \mathbf{v}_2\}$$

and the "new" basis,

$$C = \{\mathbf{w}_1, \mathbf{w}_2\}$$

Now, because $B$ is a basis for $V$ we can write each of the basis vectors from $C$ as a linear combination of the vectors from $B$.

$$\mathbf{w}_1 = a\mathbf{v}_1 + b\mathbf{v}_2 \qquad\qquad \mathbf{w}_2 = c\mathbf{v}_1 + d\mathbf{v}_2$$

This means that the coordinate matrices of the vectors from $C$ relative to the basis $B$ are,

$$[\mathbf{w}_1]_B = \begin{bmatrix} a \\ b \end{bmatrix} \qquad\qquad [\mathbf{w}_2]_B = \begin{bmatrix} c \\ d \end{bmatrix}$$

Next, let $\mathbf{u}$ be any vector in $V$. In terms of the new basis, $C$, we can write $\mathbf{u}$ as,

$$\mathbf{u} = c_1\mathbf{w}_1 + c_2\mathbf{w}_2$$

and so its coordinate matrix relative to $C$ is,

$$[\mathbf{u}]_C = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$$

Now, we know how to write the basis vectors from $C$ as linear combinations of the basis vectors from $B$, so substitute these into the linear combination for $\mathbf{u}$ above. This gives,

$$\mathbf{u} = c_1(a\mathbf{v}_1 + b\mathbf{v}_2) + c_2(c\mathbf{v}_1 + d\mathbf{v}_2)$$

Rearranging gives the following equation.

$$\mathbf{u} = (ac_1 + cc_2)\mathbf{v}_1 + (bc_1 + dc_2)\mathbf{v}_2$$

We now know the coordinate matrix of $\mathbf{u}$ relative to the "old" basis $B$. Namely,

$$[\mathbf{u}]_B = \begin{bmatrix} ac_1 + cc_2 \\ bc_1 + dc_2 \end{bmatrix}$$

We can now do a little rewrite as follows,

$$[\mathbf{u}]_B = \begin{bmatrix} ac_1 + cc_2 \\ bc_1 + dc_2 \end{bmatrix} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}[\mathbf{u}]_C$$

So, if we define $P$ to be the matrix,

$$P = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$$

where the columns of $P$ are the coordinate matrices for the basis vectors of $C$ relative to $B$, we can convert the coordinate matrix for $\mathbf{u}$ relative to the new basis $C$ into a coordinate matrix for $\mathbf{u}$ relative to the old basis $B$ as follows,

$$[\mathbf{u}]_B = P[\mathbf{u}]_C$$

Note that this may seem a little backwards at this point. We're converting to a new basis $C$ and yet we've found a way to instead find the coordinate matrix for $\mathbf{u}$ relative to the old basis $B$ and not the other way around. However, as we'll see, we can use this process to go the other way around.
Also, it could be that we have a coordinate matrix for a vector relative to the new basis and we need to determine what the coordinate matrix relative to the old basis will be, and this will allow us to do that.

Here is the formal definition of how to perform a change of basis between two basis sets.

Definition 2 Suppose that V is an n-dimensional vector space and further suppose that $B = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ and $C = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_n\}$ are two bases for V. The transition matrix from C to B is defined to be,
\[P = \big[\; [\mathbf{w}_1]_B \;\; [\mathbf{w}_2]_B \;\; \cdots \;\; [\mathbf{w}_n]_B \;\big]\]
where the ith column of P is the coordinate matrix of $\mathbf{w}_i$ relative to B. The coordinate matrix of a vector u in V, relative to B, is then related to the coordinate matrix of u relative to C by the following equation.
\[[\mathbf{u}]_B = P[\mathbf{u}]_C\]
We should probably take a look at an example or two at this point.

Example 5 Consider the standard basis for $\mathbb{R}^3$, $B = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, and the basis $C = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ where $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$.
(a) Find the transition matrix from C to B. [Solution]
(b) Find the transition matrix from B to C. [Solution]
(c) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (-2,3,4)$. [Solution]
(d) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (9,-1,-8)$. [Solution]
(e) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (10,5,0)$. [Solution]
(f) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (-6,7,-2)$. [Solution]

Solution
Note as well that we gave the coordinate vectors in the last four parts of the problem statement to conserve on space. When we go to work with them we'll need to convert them to coordinate matrices.

(a) Find the transition matrix from C to B.
When the basis we're going to (B in this case) is the standard basis vectors for the vector space, computing the transition matrix is generally pretty simple. Recall that the columns of P are just the coordinate matrices of the vectors in C relative to B. However, when B is the standard basis vectors we saw in Example 1 above that the coordinate vector (and hence the coordinate matrix) is simply the vector itself. Therefore, the transition matrix in this case is,
\[P = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}\]
[Return to Problems]

(b) Find the transition matrix from B to C.
First, do not make the mistake of thinking that the transition matrix here will be the same as the transition matrix from part (a). It won't be. To find this transition matrix we need the coordinate matrices of the standard basis vectors relative to C. This means that we need to write each of the standard basis vectors as linear combinations of the basis vectors from C. We will leave it to you to verify the following linear combinations.
\[\begin{aligned}\mathbf{e}_1 &= \tfrac{1}{10}\mathbf{v}_1 + \tfrac{1}{10}\mathbf{v}_2 + \tfrac{3}{10}\mathbf{v}_3 \\ \mathbf{e}_2 &= -\tfrac{3}{5}\mathbf{v}_1 + \tfrac{2}{5}\mathbf{v}_2 + \tfrac{1}{5}\mathbf{v}_3 \\ \mathbf{e}_3 &= \tfrac{3}{10}\mathbf{v}_1 + \tfrac{3}{10}\mathbf{v}_2 - \tfrac{1}{10}\mathbf{v}_3\end{aligned}\]
The coordinate matrices for each of these are then,
\[[\mathbf{e}_1]_C = \begin{bmatrix} \tfrac{1}{10} \\ \tfrac{1}{10} \\ \tfrac{3}{10} \end{bmatrix} \qquad [\mathbf{e}_2]_C = \begin{bmatrix} -\tfrac{3}{5} \\ \tfrac{2}{5} \\ \tfrac{1}{5} \end{bmatrix} \qquad [\mathbf{e}_3]_C = \begin{bmatrix} \tfrac{3}{10} \\ \tfrac{3}{10} \\ -\tfrac{1}{10} \end{bmatrix}\]
The transition matrix from B to C is then,
\[P' = \begin{bmatrix} \tfrac{1}{10} & -\tfrac{3}{5} & \tfrac{3}{10} \\ \tfrac{1}{10} & \tfrac{2}{5} & \tfrac{3}{10} \\ \tfrac{3}{10} & \tfrac{1}{5} & -\tfrac{1}{10} \end{bmatrix}\]
So, a significantly different matrix than in part (a), as suggested at the start of this problem. Also, notice we used a slightly different notation for the transition matrix to make sure that we can keep the two transition matrices separate for this problem.
[Return to Problems]
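Part (b) is also easy to check numerically. Here is a small sketch of ours (assuming NumPy; the notes do this by hand): each column of $P'$ is found by solving $P\mathbf{c} = \mathbf{e}_i$ for the coordinates of a standard basis vector relative to C.

```python
# Sketch: compute the transition matrix from B to C for Example 5 by solving
# for the coordinates of e1, e2, e3 relative to C.
import numpy as np

# P from part (a): the vectors of C are the columns.
P = np.column_stack([[1, -1, 1], [0, 1, 2], [3, 0, -1]]).astype(float)

# Solving P c = e_i column-by-column (i.e. P c = I) gives the coordinate
# matrices of the standard basis vectors relative to C.
P_prime = np.linalg.solve(P, np.eye(3))
print(P_prime)
# [[ 0.1 -0.6  0.3]
#  [ 0.1  0.4  0.3]
#  [ 0.3  0.2 -0.1]]
```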
(c) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (-2,3,4)$.
Okay, we've done most of the work for this problem. The remaining steps are just doing some matrix multiplication. Note as well that we already know what the answer to this is from Example 1 above. Here is the matrix multiplication for this part.
\[[\mathbf{u}]_B = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}\begin{bmatrix} -2 \\ 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 10 \\ 5 \\ 0 \end{bmatrix}\]
Sure enough we got the coordinate matrix for the point that we converted to get $(\mathbf{u})_C = (-2,3,4)$ from Example 1.
[Return to Problems]

(d) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (9,-1,-8)$.
The matrix multiplication for this part is,
\[[\mathbf{u}]_B = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}\begin{bmatrix} 9 \\ -1 \\ -8 \end{bmatrix} = \begin{bmatrix} -15 \\ -10 \\ 15 \end{bmatrix}\]
So, what have we learned here? Well, we were given the coordinate vector of a point relative to C. Since the vectors in C are not the standard basis vectors we don't really have a frame of reference for what this vector might actually look like. However, with this computation we now know the coordinates of the vector relative to the standard basis vectors and this means that we actually know what the vector is. In this case the vector is,
\[\mathbf{u} = (-15,-10,15)\]
So, as you can see, even though we're considering C to be the "new" basis here, we really did need to determine the coordinate matrix of the vector relative to the "old" basis here since that allowed us to quickly determine just what the vector was. Remember that the coordinate matrix/vector is not the vector itself, only the coefficients for the linear combination of the basis vectors.
[Return to Problems]

(e) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (10,5,0)$.
Again, here we are really just verifying the result of Example 1 in this part. Here is the matrix multiplication for this part.
\[[\mathbf{u}]_C = \begin{bmatrix} \tfrac{1}{10} & -\tfrac{3}{5} & \tfrac{3}{10} \\ \tfrac{1}{10} & \tfrac{2}{5} & \tfrac{3}{10} \\ \tfrac{3}{10} & \tfrac{1}{5} & -\tfrac{1}{10} \end{bmatrix}\begin{bmatrix} 10 \\ 5 \\ 0 \end{bmatrix} = \begin{bmatrix} -2 \\ 3 \\ 4 \end{bmatrix}\]
And again, we got the result that we would expect to get.
[Return to Problems]

(f) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (-6,7,-2)$.
Here is the matrix multiplication for this part.
\[[\mathbf{u}]_C = \begin{bmatrix} \tfrac{1}{10} & -\tfrac{3}{5} & \tfrac{3}{10} \\ \tfrac{1}{10} & \tfrac{2}{5} & \tfrac{3}{10} \\ \tfrac{3}{10} & \tfrac{1}{5} & -\tfrac{1}{10} \end{bmatrix}\begin{bmatrix} -6 \\ 7 \\ -2 \end{bmatrix} = \begin{bmatrix} -\tfrac{27}{5} \\ \tfrac{8}{5} \\ -\tfrac{1}{5} \end{bmatrix}\]
So what does this give us? Well, first we know that $(\mathbf{u})_C = \left(-\tfrac{27}{5}, \tfrac{8}{5}, -\tfrac{1}{5}\right)$. Also, since B is the standard basis vectors we know that the vector from $\mathbb{R}^3$ that we're starting with is $(-6,7,-2)$. Recall that when dealing with the standard basis vectors for $\mathbb{R}^3$ the coordinate matrix/vector just also happens to be the vector itself. Again, do not always expect this to happen. The coordinate matrix/vector that we just found tells us how to write the vector as a linear combination of vectors from the basis C. Doing this gives,
\[(-6,7,-2) = -\tfrac{27}{5}\mathbf{v}_1 + \tfrac{8}{5}\mathbf{v}_2 - \tfrac{1}{5}\mathbf{v}_3\]
[Return to Problems]

Example 6 Consider the standard basis for $P_2$, $B = \{1, x, x^2\}$, and the basis $C = \{\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3\}$, where $\mathbf{p}_1 = 2$, $\mathbf{p}_2 = -4x$, and $\mathbf{p}_3 = 5x^2 - 1$.
(a) Find the transition matrix from C to B. [Solution]
(b) Determine the polynomial that has the coordinate vector $(\mathbf{p})_C = (-4,3,11)$. [Solution]

Solution
(a) Find the transition matrix from C to B.
Now, since B (the basis we're going to) is the standard basis vectors, writing down the transition matrix will be easy this time.
\[P = \begin{bmatrix} 2 & 0 & -1 \\ 0 & -4 & 0 \\ 0 & 0 & 5 \end{bmatrix}\]
Each column of P is just the coefficients of one of the vectors from C since those will also be the coordinates of that vector relative to the standard basis vectors. The first row contains the constant terms from each basis vector, the second row the coefficients of $x$ from each basis vector and the third row the coefficients of $x^2$ from each basis vector.
[Return to Problems]

(b) Determine the polynomial that has the coordinate vector $(\mathbf{p})_C = (-4,3,11)$.
We know what the coordinates of the polynomial are relative to C, but this is not the standard basis and so it is not really clear just what the polynomial is. One way to get the solution is to just form up the linear combination with the coordinates as the scalars in the linear combination and compute it. However, it would be somewhat illustrative to use the transition matrix to answer this question. So, we need to find $[\mathbf{p}]_B$ and luckily we've got the correct transition matrix to do that for us. All we need to do is the following matrix multiplication.
\[[\mathbf{p}]_B = P[\mathbf{p}]_C = \begin{bmatrix} 2 & 0 & -1 \\ 0 & -4 & 0 \\ 0 & 0 & 5 \end{bmatrix}\begin{bmatrix} -4 \\ 3 \\ 11 \end{bmatrix} = \begin{bmatrix} -19 \\ -12 \\ 55 \end{bmatrix}\]
So, the coordinate vector for p relative to the standard basis vectors is,
\[(\mathbf{p})_B = (-19, -12, 55)\]
Therefore, the polynomial is,
\[p(x) = -19 - 12x + 55x^2\]
Note that, as mentioned above, we can also do this problem as follows,
\[p(x) = -4\mathbf{p}_1 + 3\mathbf{p}_2 + 11\mathbf{p}_3 = -4(2) + 3(-4x) + 11(5x^2 - 1) = -19 - 12x + 55x^2\]
The same answer with less work, but it won't always be less work to do it this way. We just wanted to point out the alternate method of working this problem.
[Return to Problems]

Example 7 Consider the standard basis for $M_{22}$,
\[B = \left\{ \begin{bmatrix} 1&0\\0&0 \end{bmatrix}, \begin{bmatrix} 0&0\\1&0 \end{bmatrix}, \begin{bmatrix} 0&1\\0&0 \end{bmatrix}, \begin{bmatrix} 0&0\\0&1 \end{bmatrix} \right\}\]
and the basis $C = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ where $\mathbf{v}_1 = \begin{bmatrix} 1&0\\0&0\end{bmatrix}$, $\mathbf{v}_2 = \begin{bmatrix} 2&0\\-1&0\end{bmatrix}$, $\mathbf{v}_3 = \begin{bmatrix} 0&1\\0&1\end{bmatrix}$, and $\mathbf{v}_4 = \begin{bmatrix} -3&0\\0&2\end{bmatrix}$.
(a) Find the transition matrix from C to B. [Solution]
(b) Determine the matrix that has the coordinate vector $(\mathbf{v})_C = (-8,3,5,-2)$. [Solution]

Solution
(a) Find the transition matrix from C to B.
Now, as with the previous couple of problems, B is the standard basis vectors but this time let's be a little careful. Let's find one of the columns of the transition matrix in detail to make sure we can quickly write down the remaining columns. Let's look at the fourth column. To find this we need to write $\mathbf{v}_4$ as a linear combination of the standard basis vectors. This is fairly simple to do.
\[\begin{bmatrix} -3&0\\0&2 \end{bmatrix} = (-3)\begin{bmatrix}1&0\\0&0\end{bmatrix} + (0)\begin{bmatrix}0&0\\1&0\end{bmatrix} + (0)\begin{bmatrix}0&1\\0&0\end{bmatrix} + (2)\begin{bmatrix}0&0\\0&1\end{bmatrix}\]
So, the coordinate matrix for $\mathbf{v}_4$ relative to B, and hence the fourth column of P, is,
\[[\mathbf{v}_4]_B = \begin{bmatrix} -3 \\ 0 \\ 0 \\ 2 \end{bmatrix}\]
So, each column will be the entries from the $\mathbf{v}_i$'s and, with the standard basis vectors in the order that we've been using them here, the first two entries are the first column of $\mathbf{v}_i$ and the last two entries are the second column of $\mathbf{v}_i$. Here is the transition matrix for this problem.
\[P = \begin{bmatrix} 1 & 2 & 0 & -3 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 \end{bmatrix}\]
[Return to Problems]

(b) Determine the matrix that has the coordinate vector $(\mathbf{v})_C = (-8,3,5,-2)$.
So, just as with the previous problem we have the coordinate vector, but that is for the non-standard basis vectors and so it's not readily apparent what the matrix will be. As with the previous problem we could just write down the linear combination of the vectors from C and compute it directly, but let's go ahead and use the transition matrix.
\[[\mathbf{v}]_B = \begin{bmatrix} 1 & 2 & 0 & -3 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 \end{bmatrix}\begin{bmatrix} -8 \\ 3 \\ 5 \\ -2 \end{bmatrix} = \begin{bmatrix} 4 \\ -3 \\ 5 \\ 1 \end{bmatrix}\]
Now that we've got the coordinates for v relative to the standard basis we can write down v.
\[\mathbf{v} = \begin{bmatrix} 4 & 5 \\ -3 & 1 \end{bmatrix}\]
[Return to Problems]

To this point we've only worked examples where one of the bases was the standard basis vectors. Let's work one more example and this time we'll avoid the standard basis vectors. In this example we'll just find the transition matrices.

Example 8 Consider the two bases for $\mathbb{R}^2$, $B = \{(1,-1), (0,6)\}$ and $C = \{(2,1), (-1,4)\}$.
(a) Find the transition matrix from C to B. [Solution]
(b) Find the transition matrix from B to C. [Solution]

Solution
Note that you should verify for yourself that these two sets of vectors really are bases for $\mathbb{R}^2$ as we claimed them to be.

(a) Find the transition matrix from C to B.
To do this we'll need to write the vectors from C as linear combinations of the vectors from B. Here are those linear combinations.
\[\begin{aligned}(2,1) &= 2(1,-1) + \tfrac{1}{2}(0,6) \\ (-1,4) &= -(1,-1) + \tfrac{1}{2}(0,6)\end{aligned}\]
The two coordinate matrices are then,
\[[(2,1)]_B = \begin{bmatrix} 2 \\ \tfrac{1}{2} \end{bmatrix} \qquad [(-1,4)]_B = \begin{bmatrix} -1 \\ \tfrac{1}{2} \end{bmatrix}\]
and the transition matrix is then,
\[P = \begin{bmatrix} 2 & -1 \\ \tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix}\]
[Return to Problems]

(b) Find the transition matrix from B to C.
Okay, we'll need to do pretty much the same thing here, only this time we need to write the vectors from B as linear combinations of the vectors from C. Here are the linear combinations.
\[\begin{aligned}(1,-1) &= \tfrac{1}{3}(2,1) - \tfrac{1}{3}(-1,4) \\ (0,6) &= \tfrac{2}{3}(2,1) + \tfrac{4}{3}(-1,4)\end{aligned}\]
The coordinate matrices are,
\[[(1,-1)]_C = \begin{bmatrix} \tfrac{1}{3} \\ -\tfrac{1}{3} \end{bmatrix} \qquad [(0,6)]_C = \begin{bmatrix} \tfrac{2}{3} \\ \tfrac{4}{3} \end{bmatrix}\]
The transition matrix is,
\[P' = \begin{bmatrix} \tfrac{1}{3} & \tfrac{2}{3} \\ -\tfrac{1}{3} & \tfrac{4}{3} \end{bmatrix}\]
[Return to Problems]

In Examples 5 and 8 above we computed both transition matrices for each direction. There is another way of computing the second transition matrix from the first and we will close out this section with the theorem that tells us how to do that.

Theorem 1 Suppose that V is a finite dimensional vector space and that P is the transition matrix from C to B. Then,
(a) P is invertible and,
(b) $P^{-1}$ is the transition matrix from B to C.

You should go back to Examples 5 and 8 above and verify that the two transition matrices are in fact inverses of each other. Also, note that due to the difficulties sometimes present in finding the inverse of a matrix it might actually be easier to compute the second transition matrix as we did above.

Fundamental Subspaces

In this section we want to take a look at some important subspaces that are associated with matrices. In fact they are so important that they are often called the fundamental subspaces of a matrix.
We’ve actually already seen one of the fundamental subspaces, the null space, previously although we will give its definition here again for the sake of completeness. Before we give the formal definitions of the fundamental subspaces we need to quickly review a concept that we first saw back when we were looking at matrix arithmetic. Given an n m × matrix 11 12 1 21 22 2 1 2 m m n n nm a a a a a a A a a a ⎡ ⎤ ⎢ ⎥ ⎢ ⎥ = ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎣ ⎦ " " # # # " The row vectors (we called them row matrices at the time) are the vectors in m R formed out of the rows of A. The column vectors (again we called them column matrices at the time) are the vectors in n \ that are formed out of the columns of A. Example 1 Write down the row vectors and column vectors for 1 5 0 4 9 2 3 7 A − ⎡ ⎤ ⎢ ⎥ − ⎢ ⎥ = ⎢ ⎥ ⎢ ⎥ − ⎣ ⎦ Solution The row vectors are, [ ] [ ] [ ] [ ] 1 2 3 4 1 5 0 4 9 2 3 7 = − = − = = − r r r r The column vectors are 1 2 1 5 0 4 9 2 3 7 − ⎡ ⎤ ⎡ ⎤ ⎢ ⎥ ⎢ ⎥ − ⎢ ⎥ ⎢ ⎥ = = ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ − ⎣ ⎦ ⎣ ⎦ c c Note that despite the fact that we’re calling them vectors we are using matrix notation for them. The reason is twofold. First, they really are row/column matrices and so we may as well denote them as such and second in this way we can keep the “orientation” of each to remind us whether or not they are row vectors or column vectors. In other words, row vectors are listed horizontally and column vectors are listed vertically. Because we’ll be using the matrix notation for the row and column vectors we’ll be using matrix notation for vectors in general in this section so we won’t be mixing and matching the notations too much. Linear Algebra © 2007 Paul Dawkins 253 Here then are the definitions of the three fundamental subspaces that we’ll be investigating in this section. Definition 1 Suppose that A is an n m × matrix. (a) The subspace of m R that is spanned by the row vectors of A is called the row space of A. (b) The subspace of n R that is spanned by the column vectors of A is called the column space of A. (c) The set of all x in m \ such that A = x 0 (which is a subspace of m \ by Theorem 2 from the Subspaces section) is called the null space of A. We are going to be particularly interested in the basis for each of these subspaces and that in turn means that we’re going to be able to discuss the dimension of each of them. At this point we can give the notation for the dimension of the null space, but we’ll need to wait a bit before we do so for the row and column spaces. The reason for the delay will be apparent once we reach that point. So, let’s go ahead and give the notation for the null space. Definition 2 The dimension of the null space of A is called the nullity of A and is denoted by ( ) nullity A . We should work an example at this point. Because we’ve already seen how to find the basis for the null space (Example 4(b) in the Subspaces section and Example 7 of the Basis section) we’ll do one example at this point and then devote the remainder of the discussion on basis/dimension of these subspaces to finding the basis/dimension for the row and column space. Note that we will see an example or two later in this section of null spaces as well. Example 2 Determine a basis for the null space of the following matrix. 2 4 1 2 2 3 1 2 0 0 1 1 10 4 2 4 2 4 A − − − ⎡ ⎤ ⎢ ⎥ = − − ⎢ ⎥ ⎢ ⎥ − − − ⎣ ⎦ Solution So, to find the null space we need to solve the following system of equations. 
\[\begin{aligned} 2x_1 - 4x_2 + x_3 + 2x_4 - 2x_5 - 3x_6 &= 0 \\ x_1 - 2x_2 - x_5 + x_6 &= 0 \\ 10x_1 - 4x_2 - 2x_3 + 4x_4 - 2x_5 + 4x_6 &= 0 \end{aligned}\]
We'll leave it to you to verify that the solution is given by,
\[\begin{aligned} x_1 &= -t + r & x_2 &= -\tfrac{1}{2}t - \tfrac{1}{2}s + r & x_3 &= -2t + 5r \\ x_4 &= t & x_5 &= s & x_6 &= r \end{aligned} \qquad t, s, r \text{ are any real numbers}\]
In matrix form the solution can be written as,
\[\mathbf{x} = \begin{bmatrix} -t + r \\ -\tfrac{1}{2}t - \tfrac{1}{2}s + r \\ -2t + 5r \\ t \\ s \\ r \end{bmatrix} = t\begin{bmatrix} -1 \\ -\tfrac{1}{2} \\ -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + s\begin{bmatrix} 0 \\ -\tfrac{1}{2} \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} + r\begin{bmatrix} 1 \\ 1 \\ 5 \\ 0 \\ 0 \\ 1 \end{bmatrix}\]
So, the solution can be written as a linear combination of the three linearly independent vectors (verify the linearly independent claim!)
\[\mathbf{x}_1 = \begin{bmatrix} -1 \\ -\tfrac{1}{2} \\ -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \mathbf{x}_2 = \begin{bmatrix} 0 \\ -\tfrac{1}{2} \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \qquad \mathbf{x}_3 = \begin{bmatrix} 1 \\ 1 \\ 5 \\ 0 \\ 0 \\ 1 \end{bmatrix}\]
and so these three vectors then form the basis for the null space since they span the null space and are linearly independent. Note that this also means that the null space has a dimension of 3 since there are three basis vectors for the null space, and so we can see that
\[\operatorname{nullity}(A) = 3\]
Again, remember that we'll be using matrix notation for vectors in this section.

Okay, now that we've gotten an example of the basis for the null space taken care of, we need to move onto finding bases (and hence the dimensions) for the row and column spaces of a matrix. However, before we do that we first need a couple of theorems out of the way. The first theorem tells us how to find the basis for a matrix that is in row-echelon form.

Theorem 1 Suppose that the matrix U is in row-echelon form. The row vectors containing leading 1's (so the non-zero row vectors) will form a basis for the row space of U. The column vectors that contain the leading 1's from the row vectors will form a basis for the column space of U.

Example 3 Find the basis for the row and column space of the following matrix.
\[U = \begin{bmatrix} 1 & -5 & 2 & 3 & 5 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\]
Solution
Okay, the basis for the row space is simply all the row vectors that contain a leading 1. So, for this matrix the basis for the row space is,
\[\mathbf{r}_1 = \begin{bmatrix} 1 & -5 & 2 & 3 & 5 \end{bmatrix} \quad \mathbf{r}_2 = \begin{bmatrix} 0 & 0 & 1 & -1 & 0 \end{bmatrix} \quad \mathbf{r}_3 = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 \end{bmatrix}\]
We can also see that the dimension of the row space will be 3. The basis for the column space will be the columns that contain leading 1's, and so for this matrix the basis for the column space will be,
\[\mathbf{c}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \mathbf{c}_3 = \begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \mathbf{c}_5 = \begin{bmatrix} 5 \\ 0 \\ 1 \\ 0 \end{bmatrix}\]
Note that we subscripted the vectors here with the column that each came out of. We will generally do that for these problems. Also note that the dimension of the column space is 3 as well.

Now, all of this is fine provided we have a matrix in row-echelon form. However, as we know, most matrices will not be in row-echelon form. The following two theorems will tell us how to find the basis for the row and column space of a general matrix.

Theorem 2 Suppose that A is a matrix and U is a matrix in row-echelon form that has been obtained by performing row operations on A. Then the row space of A and the row space of U are the same space.

So, how does this theorem help us? Well, if the matrices A and U have the same row space, then if we know a basis for one of them we will have a basis for the other.
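If you want to check computations like Example 2 with software, here is a sketch using sympy (our own choice of tool, not something the notes use). It reproduces the null space basis found above and, in the spirit of Theorem 2, row reduces A without changing its row space.

```python
# Sketch: null space and reduced form of the matrix from Example 2.
from sympy import Matrix

A = Matrix([[ 2, -4,  1, 2, -2, -3],
            [ 1, -2,  0, 0, -1,  1],
            [10, -4, -2, 4, -2,  4]])

# Basis for the null space (three vectors, so nullity(A) = 3).
for v in A.nullspace():
    print(v.T)
# [-1, -1/2, -2, 1, 0, 0], [0, -1/2, 0, 0, 1, 0], [1, 1, 5, 0, 0, 1]

# Reduced row-echelon form; its non-zero rows span the same row space as A.
R, pivots = A.rref()
print(pivots)   # (0, 1, 2) -- leading entries in the first three columns
```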
Notice as well that we assumed the matrix U is in row-echelon form and we do know how to find a basis for its row space. Therefore, to find a basis for the row space of a matrix A we'll need to reduce it to row-echelon form. Once in row-echelon form we can write down a basis for the row space of U, but that is the same as the row space of A and so that set of vectors will also be a basis for the row space of A.

So, what about a basis for the column space? That's not quite as straightforward, but is almost as simple.

Theorem 3 Suppose that A and B are two row equivalent matrices (so we got from one to the other by row operations). Then a set of column vectors from A will be a basis for the column space of A if and only if the corresponding columns from B form a basis for the column space of B.

How does this theorem help us to find a basis for the column space of a general matrix? Well, let's start with a matrix A and reduce it to row-echelon form, U (which we'll need for a basis for the row space anyway). Now, because we arrived at U by applying row operations to A we know that A and U are row equivalent. Next, from Theorem 1 we know how to identify the columns from U that will form a basis for the column space of U. These columns will probably not be a basis for the column space of A, however what Theorem 3 tells us is that the corresponding columns from A will form a basis for the column space of A. For example, suppose the columns 1, 2, 5 and 8 from U form a basis for the column space of U. Then columns 1, 2, 5 and 8 from A will form a basis for the column space of A.

Before we work an example we can now talk about the dimension of the row and column space of a matrix A. From our theorems above we know that to find a basis for both the row and column space of a matrix A we first need to reduce it to row-echelon form, and we can get a basis for the row and column space from that.

Let's go back and take a look at Theorem 1 in a little more detail. According to this theorem the rows with leading 1's will form a basis for the row space and the columns containing the same leading 1's will form a basis for the column space. Now, there are a fixed number of leading 1's and each leading 1 will be in a separate column. For example, there won't be two leading 1's in the second column because that would mean that the upper 1 (one) would not be a leading 1. Think about this for a second. If there are k leading 1's in a row-echelon matrix then there will be k row vectors in a basis for the row space, and so the row space will have a dimension of k. However, since each of the leading 1's will be in separate columns there will also be k column vectors that will form a basis for the column space, and so the column space will also have a dimension of k. This will always happen, and this is the reason that we delayed talking about the dimension of the row and column space above. We needed to get a couple of theorems out of the way so we could give the following theorem/definition.

Theorem 4 Suppose that A is a matrix. Then the row space of A and the column space of A will have the same dimension. We call this common dimension the rank of A and denote it by $\operatorname{rank}(A)$.

Note that if A is an $n \times m$ matrix we know that the row space will be a subspace of $\mathbb{R}^m$ and hence have a dimension of m or less, and that the column space will be a subspace of $\mathbb{R}^n$ and hence have a dimension of n or less.
Then, because we know that the dimension of the row and column space must be the same, we have the following upper bound for the rank of a matrix.
\[\operatorname{rank}(A) \le \min(n, m)\]
We should now work an example.

Example 4 Find a basis for the row and column space of the matrix from Example 2 above. Determine the rank of the matrix.
Solution
Before starting this example let's note that by the upper bound for the rank above we know that the largest that the rank can be is 3, since that is the smaller of the number of rows and columns in A.

So, the first thing that we need to do is get the matrix into row-echelon form. We will leave it to you to verify that the following is one possible row-echelon form for the matrix from Example 2 above. If you need a refresher on how to reduce a matrix to row-echelon form you can go back to the section on Solving Systems of Equations. Also, recall that there is more than one possible row-echelon form for a given matrix.
\[U = \begin{bmatrix} 1 & -2 & 0 & 0 & -1 & 1 \\ 0 & 1 & -\tfrac{1}{8} & \tfrac{1}{4} & \tfrac{1}{2} & -\tfrac{3}{8} \\ 0 & 0 & 1 & 2 & 0 & -5 \end{bmatrix}\]
So, a basis for the row space of the matrix will be every row that contains a leading 1 (all of them in this case). A basis for the row space is then,
\[\begin{aligned}\mathbf{r}_1 &= \begin{bmatrix} 1 & -2 & 0 & 0 & -1 & 1 \end{bmatrix} \\ \mathbf{r}_2 &= \begin{bmatrix} 0 & 1 & -\tfrac{1}{8} & \tfrac{1}{4} & \tfrac{1}{2} & -\tfrac{3}{8} \end{bmatrix} \\ \mathbf{r}_3 &= \begin{bmatrix} 0 & 0 & 1 & 2 & 0 & -5 \end{bmatrix}\end{aligned}\]
Next, the first three columns of U will form a basis for the column space of U since they all contain the leading 1's. Therefore the first three columns of A will form a basis for the column space of A. This gives the following basis for the column space of A.
\[\mathbf{c}_1 = \begin{bmatrix} 2 \\ 1 \\ 10 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} -4 \\ -2 \\ -4 \end{bmatrix} \qquad \mathbf{c}_3 = \begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix}\]
Now, as Theorem 4 suggested, both the row space and the column space of A have dimension 3 and so we have that
\[\operatorname{rank}(A) = 3\]
Before going on to another example let's stop for a bit and take a look at the results of Examples 2 and 4. From these two examples we saw that the rank and nullity of the matrix used in those examples were both 3. The fact that they were the same won't always happen as we'll see shortly and so isn't all that important. What is important to note is that $3 + 3 = 6$ and there were 6 columns in this matrix. This in fact will always be the case.

Theorem 5 Suppose that A is an $n \times m$ matrix. Then,
\[\operatorname{rank}(A) + \operatorname{nullity}(A) = m\]
Let's take a look at a couple more examples now.

Example 5 Find a basis for the null space, row space and column space of the following matrix. Determine the rank and nullity of the matrix.
\[A = \begin{bmatrix} -1 & 2 & -1 & 5 & 6 \\ 4 & -4 & -4 & -12 & -8 \\ 2 & 0 & -6 & -2 & 4 \\ -3 & 1 & 7 & -2 & 12 \end{bmatrix}\]
Solution
Before we get started we can notice that the rank can be at most 4, since that is the smaller of the number of rows and number of columns.

We'll find the null space first since that was the first thing asked for. To do this we'll need to solve the following system of equations.
\[\begin{aligned} -x_1 + 2x_2 - x_3 + 5x_4 + 6x_5 &= 0 \\ 4x_1 - 4x_2 - 4x_3 - 12x_4 - 8x_5 &= 0 \\ 2x_1 - 6x_3 - 2x_4 + 4x_5 &= 0 \\ -3x_1 + x_2 + 7x_3 - 2x_4 + 12x_5 &= 0 \end{aligned}\]
You should verify that the solution is,
\[\begin{aligned} x_1 &= 3t & x_2 &= 2t - 8s & x_3 &= t \\ x_4 &= 2s & x_5 &= s \end{aligned} \qquad s \text{ and } t \text{ are any real numbers}\]
The null space is then given by,
\[\mathbf{x} = \begin{bmatrix} 3t \\ 2t - 8s \\ t \\ 2s \\ s \end{bmatrix} = t\begin{bmatrix} 3 \\ 2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + s\begin{bmatrix} 0 \\ -8 \\ 0 \\ 2 \\ 1 \end{bmatrix}\]
and so we can see that a basis for the null space is,
\[\mathbf{x}_1 = \begin{bmatrix} 3 \\ 2 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \mathbf{x}_2 = \begin{bmatrix} 0 \\ -8 \\ 0 \\ 2 \\ 1 \end{bmatrix}\]
Therefore we now know that $\operatorname{nullity}(A) = 2$. At this point we know the rank of A by Theorem 5 above. According to this theorem the rank must be,
\[\operatorname{rank}(A) = \#\text{ columns} - \operatorname{nullity}(A) = 5 - 2 = 3\]
This will give us a nice check when we find a basis for the row space and the column space. We now know that each should contain three vectors.

Speaking of which, let's get a basis for the row space and the column space. We'll need to reduce A to row-echelon form first. We'll leave it to you to verify that a possible row-echelon form for A is,
\[U = \begin{bmatrix} 1 & -2 & 1 & -5 & -6 \\ 0 & 1 & -2 & 2 & 4 \\ 0 & 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\]
The rows containing leading 1's will form a basis for the row space of A and so this basis is,
\[\begin{aligned}\mathbf{r}_1 &= \begin{bmatrix} 1 & -2 & 1 & -5 & -6 \end{bmatrix} \\ \mathbf{r}_2 &= \begin{bmatrix} 0 & 1 & -2 & 2 & 4 \end{bmatrix} \\ \mathbf{r}_3 &= \begin{bmatrix} 0 & 0 & 0 & 1 & -2 \end{bmatrix}\end{aligned}\]
Next, the first, second and fourth columns of U contain leading 1's and so will form a basis for the column space of U, and this tells us that the first, second and fourth columns of A will form a basis for the column space of A. Here is that basis.
\[\mathbf{c}_1 = \begin{bmatrix} -1 \\ 4 \\ 2 \\ -3 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} 2 \\ -4 \\ 0 \\ 1 \end{bmatrix} \qquad \mathbf{c}_4 = \begin{bmatrix} 5 \\ -12 \\ -2 \\ -2 \end{bmatrix}\]
Note that the dimension of each of these is 3 as we noted it should be above.

Example 6 Find a basis for the null space, row space and column space of the following matrix. Determine the nullity and rank of this matrix.
\[A = \begin{bmatrix} 6 & -3 \\ -2 & 3 \\ 8 & -4 \end{bmatrix}\]
Solution
In this case we can notice that the rank of this matrix can be at most 2, since that is the minimum of the number of rows and number of columns.

To find the null space we'll need to solve the following system of equations,
\[\begin{aligned} 6x_1 - 3x_2 &= 0 \\ -2x_1 + 3x_2 &= 0 \\ 8x_1 - 4x_2 &= 0 \end{aligned}\]
We'll leave it to you to verify that the solution to this system is,
\[x_1 = 0 \qquad x_2 = 0\]
This is actually the point of this problem. There is only a single solution to the system above, namely the zero vector, 0. Therefore the null space consists solely of the zero vector, and vector spaces that consist solely of the zero vector do not have a basis and so we can't give one. Also, vector spaces consisting solely of the zero vector are defined to have a dimension of zero. Therefore, the nullity of this matrix is zero. This also tells us that the rank of this matrix must be 2 by Theorem 5.

Let's now find a basis for the row space and the column space. You should verify that one possible row-echelon form for A is,
\[U = \begin{bmatrix} 1 & -\tfrac{1}{2} \\ 0 & 1 \\ 0 & 0 \end{bmatrix}\]
A basis for the row space of A is then,
\[\mathbf{r}_1 = \begin{bmatrix} 1 & -\tfrac{1}{2} \end{bmatrix} \qquad \mathbf{r}_2 = \begin{bmatrix} 0 & 1 \end{bmatrix}\]
and since both columns of U form a basis for the column space of U, both columns from A will form a basis for the column space of A.
The basis for the column space of A is then,
\[\mathbf{c}_1 = \begin{bmatrix} 6 \\ -2 \\ 8 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} -3 \\ 3 \\ -4 \end{bmatrix}\]
Once again, both have dimension of 2 as we knew they should from our use of Theorem 5 above.

In all of the examples that we've worked to this point in finding a basis for the row space and the column space, we should notice that the basis we found for the column space consisted of columns from the original matrix while the basis we found for the row space did not consist of rows from the original matrix. Also note that we can't necessarily use the same idea we used to get a basis for the column space to get a basis for the row space. For example, let's go back and take a look at Example 5. The first three rows of U formed a basis for the row space, but that does not mean that the first three rows of A will also form a basis for the row space. In fact, in this case they won't. In this case the third row is twice the first row added onto the second row, and so the first three rows are not linearly independent (which you'll recall is required for a set of vectors to be a basis).

So, what do we do if we do want rows from the original matrix to form our basis? The answer to this is surprisingly simple.

Example 7 Find a basis for the row space of the matrix in Example 5 that consists of rows from the original matrix.
Solution
The first thing that we'll do is take the transpose of A. In doing so the rows of A will become the columns of $A^T$. This means that the row space of A will become the column space of $A^T$. Recall as well that we find a basis for the column space in terms of columns from the original matrix ($A^T$ in this case). So, we'll be finding a basis for the column space of $A^T$ in terms of the columns of $A^T$, but the columns of $A^T$ are the rows of A and the column space of $A^T$ is the row space of A. Therefore, when this is all said and done, by finding a basis for the column space of $A^T$ we will also be finding a basis for the row space of A, and it will be in terms of rows from A and not rows from the row-echelon form of the matrix.

So, here is the transpose of A.
\[A^T = \begin{bmatrix} -1 & 4 & 2 & -3 \\ 2 & -4 & 0 & 1 \\ -1 & -4 & -6 & 7 \\ 5 & -12 & -2 & -2 \\ 6 & -8 & 4 & 12 \end{bmatrix}\]
Here is a possible row-echelon form of the transpose (you should verify this).
\[U = \begin{bmatrix} 1 & -4 & -2 & 3 \\ 0 & 1 & 1 & -\tfrac{5}{4} \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\]
The first, second and fourth columns of U form a basis for the column space of U, and so a basis for the column space of $A^T$ is,
\[\mathbf{c}_1 = \begin{bmatrix} -1 \\ 2 \\ -1 \\ 5 \\ 6 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} 4 \\ -4 \\ -4 \\ -12 \\ -8 \end{bmatrix} \qquad \mathbf{c}_4 = \begin{bmatrix} -3 \\ 1 \\ 7 \\ -2 \\ 12 \end{bmatrix}\]
Again, however, the column space of $A^T$ is nothing more than the row space of A, and so these three columns are rows from A and will also form a basis for the row space. So, let's change notation a little to make it clear that we're dealing with a basis for the row space and we'll be done. Here is a basis for the row space of A in terms of rows from A itself.
\[\begin{aligned}\mathbf{r}_1 &= \begin{bmatrix} -1 & 2 & -1 & 5 & 6 \end{bmatrix} \\ \mathbf{r}_2 &= \begin{bmatrix} 4 & -4 & -4 & -12 & -8 \end{bmatrix} \\ \mathbf{r}_3 &= \begin{bmatrix} -3 & 1 & 7 & -2 & 12 \end{bmatrix}\end{aligned}\]
Next we want to give a quick theorem that gives a relationship between the solution to a system of equations and the column space of the coefficient matrix. This theorem can be useful on occasion.

Theorem 6 The system of linear equations $A\mathbf{x} = \mathbf{b}$ will be consistent (i.e. have at least one solution) if and only if b is in the column space of A.
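Theorems 5 and 6 are both easy to illustrate numerically. The following sympy sketch (our own check, with a made-up test vector) uses the matrix from Example 5: the rank and nullity add up to the number of columns, and appending b to A leaves the rank unchanged exactly when $A\mathbf{x} = \mathbf{b}$ is consistent.

```python
# Sketch: rank + nullity (Theorem 5) and a consistency check (Theorem 6).
from sympy import Matrix

A = Matrix([[-1,  2, -1,   5,  6],
            [ 4, -4, -4, -12, -8],
            [ 2,  0, -6,  -2,  4],
            [-3,  1,  7,  -2, 12]])

print(A.rank(), len(A.nullspace()), A.cols)   # 3 2 5  ->  3 + 2 = 5

b = A[:, 0] + 2*A[:, 1]          # a vector built from the columns of A
print(A.row_join(b).rank())      # still 3, so A x = b is consistent

c = Matrix([1, 0, 0, 0])         # a hypothetical right-hand side
print(A.row_join(c).rank())      # 4 > 3, so A x = c has no solution
```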
Note that since the basis for the column space of a matrix is given in terms of certain columns of A, this means that a system of equations will be consistent if and only if b can be written as a linear combination of at least some of the columns of A. This should be clear from application of the theorem above. The theorem tells us that b must be in the column space of A, but that means that it can be written as a linear combination of the basis vectors for the column space of A.

We'll close out this section with a couple of theorems relating the invertibility of a square matrix A to some of the ideas in this section.

Theorem 7 Let A be an $n \times n$ matrix. The following statements are equivalent.
(a) A is invertible.
(b) The null space of A is $\{\mathbf{0}\}$, i.e. just the zero vector.
(c) $\operatorname{nullity}(A) = 0$.
(d) $\operatorname{rank}(A) = n$.
(e) The column vectors of A form a basis for $\mathbb{R}^n$.
(f) The row vectors of A form a basis for $\mathbb{R}^n$.

The proof of this theorem follows directly from Theorem 9 in the Properties of Determinants section and from the definitions of null space, rank and nullity, so we're not going to give it here. We will point out however that if the rank of an $n \times n$ matrix is n, then a basis for the row (column) space must contain n vectors, but there are only n rows (columns) in A and so all the rows (columns) of A must be in the basis. Also, the row (column) space is a subspace of $\mathbb{R}^n$ which also has a dimension of n. These ideas are helpful in showing that (d) will imply either (e) or (f).

Finally, speaking of Theorem 9 in the Properties of Determinants section, that was also a theorem listing many equivalent statements on the invertibility of a matrix. We can merge that theorem with Theorem 7 above into the following theorem.

Theorem 8 Let A be an $n \times n$ matrix. The following statements are equivalent.
(a) A is invertible.
(b) The only solution to the system $A\mathbf{x} = \mathbf{0}$ is the trivial solution.
(c) A is row equivalent to $I_n$.
(d) A is expressible as a product of elementary matrices.
(e) $A\mathbf{x} = \mathbf{b}$ has exactly one solution for every $n \times 1$ matrix b.
(f) $A\mathbf{x} = \mathbf{b}$ is consistent for every $n \times 1$ matrix b.
(g) $\det(A) \ne 0$
(h) The null space of A is $\{\mathbf{0}\}$, i.e. just the zero vector.
(i) $\operatorname{nullity}(A) = 0$.
(j) $\operatorname{rank}(A) = n$.
(k) The column vectors of A form a basis for $\mathbb{R}^n$.
(l) The row vectors of A form a basis for $\mathbb{R}^n$.

Inner Product Spaces

If you go back to the Euclidean n-space chapter where we first introduced the concept of vectors you'll notice that we also introduced something called a dot product. However, in this chapter, where we're dealing with the general vector space, we have yet to introduce anything even remotely like the dot product. It is now time to do that. However, just as this chapter is about vector spaces in general, we are going to introduce a more general idea, and it will turn out that a dot product will fit into this more general idea. Here is the definition of this more general idea.

Definition 1 Suppose u, v, and w are all vectors in a vector space V and c is any scalar. An inner product on the vector space V is a function that associates with each pair of vectors in V, say u and v, a real number denoted by $\langle\mathbf{u},\mathbf{v}\rangle$ that satisfies the following axioms.
(a) $\langle\mathbf{u},\mathbf{v}\rangle = \langle\mathbf{v},\mathbf{u}\rangle$
(b) $\langle\mathbf{u}+\mathbf{v},\mathbf{w}\rangle = \langle\mathbf{u},\mathbf{w}\rangle + \langle\mathbf{v},\mathbf{w}\rangle$
(c) $\langle c\mathbf{u},\mathbf{v}\rangle = c\langle\mathbf{u},\mathbf{v}\rangle$
(d) $\langle\mathbf{u},\mathbf{u}\rangle \ge 0$ and $\langle\mathbf{u},\mathbf{u}\rangle = 0$ if and only if $\mathbf{u} = \mathbf{0}$

A vector space along with an inner product is called an inner product space.
Note that we are assuming here that the scalars are real numbers in this definition. In fact we probably should have been using the terms "real vector space" and "real inner product space" in this definition to make it clear. If we were to allow the scalars to be complex numbers (i.e. dealing with a complex vector space) the axioms would change slightly.

Also, in the rest of this section if we say that V is an inner product space we are implicitly assuming that it is a vector space and that some inner product has been defined on it. If we do not explicitly give the inner product then the exact inner product that we are using is not important. It will only be important in these cases that there has been an inner product defined on the vector space.

Example 1 The Euclidean inner product as defined in the Euclidean n-space section is an inner product. For reference purposes here is the Euclidean inner product. Given two vectors in $\mathbb{R}^n$, $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$, the Euclidean inner product is defined to be,
\[\langle\mathbf{u},\mathbf{v}\rangle = \mathbf{u}\cdot\mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n\]
By Theorem 2 from the Euclidean n-space section we can see that this does in fact satisfy all the axioms of the definition. Therefore, $\mathbb{R}^n$ is an inner product space.

Here are some more examples of inner products.

Example 2 Suppose that $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are two vectors in $\mathbb{R}^n$ and that $w_1$, $w_2$, ..., $w_n$ are positive real numbers (called weights). Then the weighted Euclidean inner product is defined to be,
\[\langle\mathbf{u},\mathbf{v}\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n\]
It is fairly simple to show that this is in fact an inner product. All we need to do is show that it satisfies all the axioms from Definition 1. So, suppose that u, v, and a are all vectors in $\mathbb{R}^n$ and that c is a scalar.

First, note that because we know that real numbers commute with multiplication we have,
\[\langle\mathbf{u},\mathbf{v}\rangle = w_1 u_1 v_1 + \cdots + w_n u_n v_n = w_1 v_1 u_1 + \cdots + w_n v_n u_n = \langle\mathbf{v},\mathbf{u}\rangle\]
So, the first axiom is satisfied.

To show the second axiom is satisfied we just need to run through the definition as follows,
\[\begin{aligned}\langle\mathbf{u}+\mathbf{v},\mathbf{a}\rangle &= w_1(u_1+v_1)a_1 + w_2(u_2+v_2)a_2 + \cdots + w_n(u_n+v_n)a_n \\ &= \big(w_1 u_1 a_1 + \cdots + w_n u_n a_n\big) + \big(w_1 v_1 a_1 + \cdots + w_n v_n a_n\big) \\ &= \langle\mathbf{u},\mathbf{a}\rangle + \langle\mathbf{v},\mathbf{a}\rangle\end{aligned}\]
and the second axiom is satisfied.

Here's the work for the third axiom.
\[\langle c\mathbf{u},\mathbf{v}\rangle = w_1(cu_1)v_1 + \cdots + w_n(cu_n)v_n = c\big(w_1 u_1 v_1 + \cdots + w_n u_n v_n\big) = c\langle\mathbf{u},\mathbf{v}\rangle\]
Finally, for the fourth axiom there are two things we need to check. Here's the first,
\[\langle\mathbf{u},\mathbf{u}\rangle = w_1 u_1^2 + w_2 u_2^2 + \cdots + w_n u_n^2 \ge 0\]
Note that this is greater than or equal to zero because the weights $w_1$, $w_2$, ..., $w_n$ are positive numbers. If we hadn't made that assumption there would be no way to guarantee that this would be positive.

Now suppose that $\langle\mathbf{u},\mathbf{u}\rangle = 0$. Because each of the terms above is greater than or equal to zero, the only way this can be zero is if each of the terms is zero itself. Again, however, the weights are positive numbers and so this means that
\[u_i^2 = 0 \quad \Rightarrow \quad u_i = 0, \quad i = 1, 2, \ldots, n\]
We therefore must have $\mathbf{u} = \mathbf{0}$ if $\langle\mathbf{u},\mathbf{u}\rangle = 0$. Likewise, if $\mathbf{u} = \mathbf{0}$ then by plugging in we can see that we must also have $\langle\mathbf{u},\mathbf{u}\rangle = 0$, and so the fourth axiom is also satisfied.
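As a quick illustration (ours, not from the notes), the weighted Euclidean inner product is a one-line function in NumPy, and the axioms we just verified by hand can be spot-checked numerically at random vectors. The weights below echo the ones used in Example 6(b) later in this section; the function name is our own.

```python
# Sketch: the weighted Euclidean inner product from Example 2, with a
# numerical spot check of the first three axioms and half of the fourth.
import numpy as np

def weighted_inner(u, v, w):
    """<u, v> = w1*u1*v1 + ... + wn*un*vn, with all weights w positive."""
    return np.sum(w * u * v)

rng = np.random.default_rng(0)
w = np.array([2.0, 6.0, 0.2])              # positive weights
u, v, a = rng.standard_normal((3, 3))      # three random vectors in R^3
c = 1.7

assert np.isclose(weighted_inner(u, v, w), weighted_inner(v, u, w))
assert np.isclose(weighted_inner(u + v, a, w),
                  weighted_inner(u, a, w) + weighted_inner(v, a, w))
assert np.isclose(weighted_inner(c * u, v, w), c * weighted_inner(u, v, w))
assert weighted_inner(u, u, w) >= 0
```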
An inner product on $M_{22}$ can be defined as,
\[\langle A, B\rangle = \operatorname{tr}\!\left(A^T B\right)\]
where $\operatorname{tr}(C)$ is the trace of the matrix C and $A = \begin{bmatrix} a_1 & a_3 \\ a_2 & a_4 \end{bmatrix}$ and $B = \begin{bmatrix} b_1 & b_3 \\ b_2 & b_4 \end{bmatrix}$ are two matrices in $M_{22}$. We will leave it to you to verify that this is in fact an inner product. This is not difficult once you show (you can do a direct computation to show this) that
\[\operatorname{tr}\!\left(A^T B\right) = \operatorname{tr}\!\left(B^T A\right) = a_1 b_1 + a_2 b_2 + a_3 b_3 + a_4 b_4\]
This formula is very similar to the Euclidean inner product formula, and so showing that this is an inner product will be almost identical to showing that the Euclidean inner product is an inner product. There are differences, but for the most part it is pretty much the same.

The next two examples require that you've had Calculus, and so if you haven't had Calculus you can skip these examples. Both of these however are very important inner products in some areas of mathematics, although we're not going to be looking at them much here because of the Calculus requirement.

Example 4 Suppose that $\mathbf{f} = f(x)$ and $\mathbf{g} = g(x)$ are two continuous functions on the interval $[a,b]$. In other words, they are in the vector space $C[a,b]$. An inner product on $C[a,b]$ can be defined as,
\[\langle\mathbf{f},\mathbf{g}\rangle = \int_a^b f(x)\,g(x)\,dx\]
Provided you remember your Calculus, showing this is an inner product is fairly simple. Suppose that f, g, and h are continuous functions in $C[a,b]$ and that c is any scalar.

Here is the work showing the first axiom is satisfied.
\[\langle\mathbf{f},\mathbf{g}\rangle = \int_a^b f(x)\,g(x)\,dx = \int_a^b g(x)\,f(x)\,dx = \langle\mathbf{g},\mathbf{f}\rangle\]
The second axiom is just as simple,
\[\langle\mathbf{f}+\mathbf{g},\mathbf{h}\rangle = \int_a^b \big(f(x)+g(x)\big)h(x)\,dx = \int_a^b f(x)\,h(x)\,dx + \int_a^b g(x)\,h(x)\,dx = \langle\mathbf{f},\mathbf{h}\rangle + \langle\mathbf{g},\mathbf{h}\rangle\]
Here's the third axiom.
\[\langle c\mathbf{f},\mathbf{g}\rangle = \int_a^b c\,f(x)\,g(x)\,dx = c\int_a^b f(x)\,g(x)\,dx = c\langle\mathbf{f},\mathbf{g}\rangle\]
Finally, the fourth axiom. This is the only one that really requires something that you may not remember from a Calculus class. The first three axioms all used properties of integrals that you should remember. First, we'll start with the following,
\[\langle\mathbf{f},\mathbf{f}\rangle = \int_a^b f(x)\,f(x)\,dx = \int_a^b f^2(x)\,dx\]
Now, recall that if you integrate a continuous function that is greater than or equal to zero, then the integral must also be greater than or equal to zero. Hence,
\[\langle\mathbf{f},\mathbf{f}\rangle \ge 0\]
Next, if $\mathbf{f} = \mathbf{0}$ then clearly we'll have $\langle\mathbf{f},\mathbf{f}\rangle = 0$. Likewise, if we have
\[\langle\mathbf{f},\mathbf{f}\rangle = \int_a^b f^2(x)\,dx = 0\]
then we must also have $\mathbf{f} = \mathbf{0}$.

Example 5 Suppose that $\mathbf{f} = f(x)$ and $\mathbf{g} = g(x)$ are two vectors in $C[a,b]$ and further suppose that $w(x) > 0$ is a continuous function called a weight. A weighted inner product on $C[a,b]$ can be defined as,
\[\langle\mathbf{f},\mathbf{g}\rangle = \int_a^b f(x)\,g(x)\,w(x)\,dx\]
We'll leave it to you to verify that this is an inner product. It should be fairly simple if you've had Calculus and you followed the verification of the weighted Euclidean inner product. The key is again the fact that the weight is a strictly positive function on the interval $[a,b]$.

Okay, once we have an inner product defined on a vector space we can define both a norm and a distance for the inner product space as follows.

Definition 2 Suppose that V is an inner product space. The norm or length of a vector u in V is defined to be,
\[\|\mathbf{u}\| = \langle\mathbf{u},\mathbf{u}\rangle^{\frac{1}{2}}\]

Definition 3 Suppose that V is an inner product space and that u and v are two vectors in V. The distance between u and v, denoted by $d(\mathbf{u},\mathbf{v})$, is defined to be,
\[d(\mathbf{u},\mathbf{v}) = \|\mathbf{u}-\mathbf{v}\|\]

We're not going to be working many examples with actual numbers in them in this section, but we should work one or two, so at this point let's pause and work an example.
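Before working the example by hand, here is a sympy sketch (our own, using sympy's symbolic integration) of the integral inner product from Example 4, together with the norm and distance it induces on $C[0,1]$. The values it prints are exactly the ones computed in part (c) of the example below.

```python
# Sketch: <f, g> = integral of f*g over [0, 1], plus the induced norm
# and distance, for f = x and g = x^2.
from sympy import symbols, integrate, sqrt

x = symbols('x')

def inner(f, g, a=0, b=1):
    return integrate(f * g, (x, a, b))

f, g = x, x**2
print(inner(f, g))                 # 1/4
print(sqrt(inner(f, f)))           # sqrt(3)/3, i.e. sqrt(1/3)
print(sqrt(inner(f - g, f - g)))   # sqrt(30)/30, i.e. sqrt(1/30)
```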
Note that part (c) in the example below requires Calculus. If you haven't had Calculus you should skip that part.

Example 6 For each of the following compute $\langle\mathbf{u},\mathbf{v}\rangle$, $\|\mathbf{u}\|$ and $d(\mathbf{u},\mathbf{v})$ for the given pair of vectors and inner product.
(a) $\mathbf{u} = (2,-1,4)$ and $\mathbf{v} = (3,2,0)$ in $\mathbb{R}^3$ with the standard Euclidean inner product. [Solution]
(b) $\mathbf{u} = (2,-1,4)$ and $\mathbf{v} = (3,2,0)$ in $\mathbb{R}^3$ with the weighted Euclidean inner product using the weights $w_1 = 2$, $w_2 = 6$ and $w_3 = \tfrac{1}{5}$. [Solution]
(c) $\mathbf{u} = x$ and $\mathbf{v} = x^2$ in $C[0,1]$ using the inner product defined in Example 4. [Solution]

Solution
(a) $\mathbf{u} = (2,-1,4)$ and $\mathbf{v} = (3,2,0)$ in $\mathbb{R}^3$ with the standard Euclidean inner product.
There really isn't much to do here other than go through the formulas.
\[\langle\mathbf{u},\mathbf{v}\rangle = (2)(3) + (-1)(2) + (4)(0) = 4\]
\[\|\mathbf{u}\| = \langle\mathbf{u},\mathbf{u}\rangle^{\frac{1}{2}} = \sqrt{2^2 + (-1)^2 + 4^2} = \sqrt{21}\]
\[d(\mathbf{u},\mathbf{v}) = \|\mathbf{u}-\mathbf{v}\| = \|(-1,-3,4)\| = \sqrt{(-1)^2 + (-3)^2 + 4^2} = \sqrt{26}\]
[Return to Problems]

(b) $\mathbf{u} = (2,-1,4)$ and $\mathbf{v} = (3,2,0)$ in $\mathbb{R}^3$ with the weighted Euclidean inner product using the weights $w_1 = 2$, $w_2 = 6$ and $w_3 = \tfrac{1}{5}$.
Again, not a lot to do other than formula work. Note however that even though we've got the same vectors as part (a) we should expect to get different results because we are now working with a weighted inner product.
\[\langle\mathbf{u},\mathbf{v}\rangle = (2)(3)(2) + (-1)(2)(6) + (4)(0)\left(\tfrac{1}{5}\right) = 0\]
\[\|\mathbf{u}\| = \sqrt{(2)^2(2) + (-1)^2(6) + (4)^2\left(\tfrac{1}{5}\right)} = \sqrt{\tfrac{86}{5}} = \sqrt{17.2}\]
\[d(\mathbf{u},\mathbf{v}) = \|(-1,-3,4)\| = \sqrt{(-1)^2(2) + (-3)^2(6) + (4)^2\left(\tfrac{1}{5}\right)} = \sqrt{\tfrac{296}{5}} = \sqrt{59.2}\]
So, we did get different answers here. Note that under this weighted norm u is "smaller" in some way than under the standard Euclidean norm and the distance between u and v is "larger" in some way than under the standard Euclidean norm.
[Return to Problems]

(c) $\mathbf{u} = x$ and $\mathbf{v} = x^2$ in $C[0,1]$ using the inner product defined in Example 4.
Okay, again, if you haven't had Calculus this part won't make much sense and you should skip it. If you have had Calculus this should be a fairly simple example.
\[\langle\mathbf{u},\mathbf{v}\rangle = \int_0^1 x\cdot x^2\,dx = \int_0^1 x^3\,dx = \left.\tfrac{1}{4}x^4\right|_0^1 = \tfrac{1}{4}\]
\[\|\mathbf{u}\| = \sqrt{\langle\mathbf{u},\mathbf{u}\rangle} = \sqrt{\int_0^1 x^2\,dx} = \sqrt{\left.\tfrac{1}{3}x^3\right|_0^1} = \sqrt{\tfrac{1}{3}}\]
\[d(\mathbf{u},\mathbf{v}) = \|\mathbf{u}-\mathbf{v}\| = \sqrt{\int_0^1 \left(x - x^2\right)^2\,dx} = \sqrt{\left.\left(\tfrac{1}{3}x^3 - \tfrac{1}{2}x^4 + \tfrac{1}{5}x^5\right)\right|_0^1} = \sqrt{\tfrac{1}{30}}\]
[Return to Problems]

Now, we also have all the same properties for the inner product, norm and distance that we had for the dot product back in the Euclidean n-space section. We'll list them all here for reference purposes and so you can see them with the updated inner product notation. The proofs for these theorems are practically identical to their dot product counterparts and so aren't shown here.

Theorem 1 Suppose u, v, and w are vectors in an inner product space and c is any scalar. Then,
(a) $\langle\mathbf{u},\mathbf{v}+\mathbf{w}\rangle = \langle\mathbf{u},\mathbf{v}\rangle + \langle\mathbf{u},\mathbf{w}\rangle$
(b) $\langle\mathbf{u}-\mathbf{v},\mathbf{w}\rangle = \langle\mathbf{u},\mathbf{w}\rangle - \langle\mathbf{v},\mathbf{w}\rangle$
(c) $\langle\mathbf{u},\mathbf{v}-\mathbf{w}\rangle = \langle\mathbf{u},\mathbf{v}\rangle - \langle\mathbf{u},\mathbf{w}\rangle$
(d) $\langle\mathbf{u},c\mathbf{v}\rangle = c\langle\mathbf{u},\mathbf{v}\rangle$
(e) $\langle\mathbf{u},\mathbf{0}\rangle = \langle\mathbf{0},\mathbf{u}\rangle = 0$

Theorem 2 Cauchy-Schwarz Inequality : Suppose u and v are two vectors in an inner product space. Then
\[\left|\langle\mathbf{u},\mathbf{v}\rangle\right| \le \|\mathbf{u}\|\,\|\mathbf{v}\|\]

Theorem 3 Suppose u and v are two vectors in an inner product space and that c is a scalar. Then,
(a) $\|\mathbf{u}\| \ge 0$
(b) $\|\mathbf{u}\| = 0$ if and only if $\mathbf{u} = \mathbf{0}$.
(c) $\|c\mathbf{u}\| = |c|\,\|\mathbf{u}\|$
(d) $\|\mathbf{u}+\mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$ - usually called the Triangle Inequality

Theorem 4 Suppose u, v, and w are vectors in an inner product space. Then,
(a) $d(\mathbf{u},\mathbf{v}) \ge 0$
(b) $d(\mathbf{u},\mathbf{v}) = 0$ if and only if $\mathbf{u} = \mathbf{v}$.
(c) $d(\mathbf{u},\mathbf{v}) = d(\mathbf{v},\mathbf{u})$
(d) $d(\mathbf{u},\mathbf{v}) \le d(\mathbf{u},\mathbf{w}) + d(\mathbf{w},\mathbf{v})$ - usually called the Triangle Inequality

There was also an important concept that we saw back in the Euclidean n-space section that we'll need in the next section. Here is the definition for this concept in terms of inner product spaces.

Definition 4 Suppose that u and v are two vectors in an inner product space. They are said to be orthogonal if $\langle\mathbf{u},\mathbf{v}\rangle = 0$.

Note that whether or not two vectors are orthogonal will depend greatly on the inner product that we're using. Two vectors may be orthogonal with respect to one inner product defined on a vector space, but not orthogonal with respect to a second inner product defined on the same vector space.

Example 7 The two vectors $\mathbf{u} = (2,-1,4)$ and $\mathbf{v} = (3,2,0)$ in $\mathbb{R}^3$ are not orthogonal with respect to the standard Euclidean inner product, but are orthogonal with respect to the weighted Euclidean inner product with weights $w_1 = 2$, $w_2 = 6$ and $w_3 = \tfrac{1}{5}$. We saw the computations for these back in Example 6.

Now that we have the definition of orthogonality out of the way we can give the general version of the Pythagorean Theorem for an inner product space.

Theorem 5 Suppose that u and v are two orthogonal vectors in an inner product space. Then,
\[\|\mathbf{u}+\mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2\]

There is one final topic that we want to briefly touch on in this section. In previous sections we spent quite a bit of time talking about subspaces of a vector space. There are also subspaces that will only arise if we are working with an inner product space. The following definition gives one such subspace.

Definition 5 Suppose that W is a subspace of an inner product space V. We say that a vector u from V is orthogonal to W if it is orthogonal to every vector in W. The set of all vectors that are orthogonal to W is called the orthogonal complement of W and is denoted by $W^\perp$. We say that W and $W^\perp$ are orthogonal complements.

We're not going to be doing much with the orthogonal complement in these notes, although they will show up on occasion. We just wanted to acknowledge that there are subspaces that are only going to be found in inner product spaces. Here are a couple of nice theorems pertaining to orthogonal complements.

Theorem 6 Suppose W is a subspace of an inner product space V. Then,
(a) $W^\perp$ is a subspace of V.
(b) Only the zero vector, 0, is common to both W and $W^\perp$.
(c) $\left(W^\perp\right)^\perp = W$. Or, in other words, the orthogonal complement of $W^\perp$ is W.

Here is a nice theorem that relates some of the fundamental subspaces that we were discussing in the previous section.

Theorem 7 If A is an $n \times m$ matrix then,
(a) The null space of A and the row space of A are orthogonal complements in $\mathbb{R}^m$ with respect to the standard Euclidean inner product.
(b) The null space of $A^T$ and the column space of A are orthogonal complements in $\mathbb{R}^n$ with respect to the standard Euclidean inner product.

Orthonormal Basis

We now need to come back and revisit the topic of basis. We are going to be looking at a special kind of basis in this section that can arise in an inner product space, and yes, it does require an inner product space to construct. However, before we do that we're going to need to get some preliminary topics out of the way first.

We'll first need to get a set of definitions out of the way.

Definition 1 Suppose that S is a set of vectors in an inner product space.
(a) If each pair of distinct vectors from S is orthogonal then we call S an orthogonal set.
(b) If S is an orthogonal set and each of the vectors in S also has a norm of 1 then we call S an orthonormal set.

Let's take a quick look at an example.

Example 1 Given the three vectors $\mathbf{v}_1 = (2,0,-1)$, $\mathbf{v}_2 = (0,-1,0)$ and $\mathbf{v}_3 = (2,0,4)$ in $\mathbb{R}^3$ answer each of the following.
(a) Show that they form an orthogonal set under the standard Euclidean inner product for $\mathbb{R}^3$ but not an orthonormal set. [Solution]
(b) Turn them into a set of vectors that will form an orthonormal set of vectors under the standard Euclidean inner product for $\mathbb{R}^3$. [Solution]

Solution
(a) Show that they form an orthogonal set under the standard Euclidean inner product for $\mathbb{R}^3$ but not an orthonormal set.
All we need to do here to show that they form an orthogonal set is to compute the inner product of all the possible pairs and show that they are all zero.
\[\begin{aligned}\langle\mathbf{v}_1,\mathbf{v}_2\rangle &= (2)(0) + (0)(-1) + (-1)(0) = 0 \\ \langle\mathbf{v}_1,\mathbf{v}_3\rangle &= (2)(2) + (0)(0) + (-1)(4) = 0 \\ \langle\mathbf{v}_2,\mathbf{v}_3\rangle &= (0)(2) + (-1)(0) + (0)(4) = 0\end{aligned}\]
So, they do form an orthogonal set. To show that they don't form an orthonormal set we just need to show that at least one of them does not have a norm of 1. For the practice we'll compute all the norms.
\[\begin{aligned}\|\mathbf{v}_1\| &= \sqrt{2^2 + 0^2 + (-1)^2} = \sqrt{5} \\ \|\mathbf{v}_2\| &= \sqrt{0^2 + (-1)^2 + 0^2} = 1 \\ \|\mathbf{v}_3\| &= \sqrt{2^2 + 0^2 + 4^2} = \sqrt{20} = 2\sqrt{5}\end{aligned}\]
So, one of them has a norm of 1, but the other two don't and so they are not an orthonormal set of vectors.
[Return to Problems]

(b) Turn them into a set of vectors that will form an orthonormal set of vectors under the standard Euclidean inner product for $\mathbb{R}^3$.
We've actually done most of the work here for this part of the problem already. Back when we were working in $\mathbb{R}^n$ we saw that we could turn any vector v into a vector with norm 1 by dividing by its norm as follows,
\[\frac{1}{\|\mathbf{v}\|}\mathbf{v}\]
This new vector will have a norm of 1. So, we can turn each of the vectors above into a vector with norm 1.
\[\begin{aligned}\mathbf{u}_1 &= \frac{1}{\|\mathbf{v}_1\|}\mathbf{v}_1 = \frac{1}{\sqrt{5}}(2,0,-1) = \left(\frac{2}{\sqrt{5}}, 0, -\frac{1}{\sqrt{5}}\right) \\ \mathbf{u}_2 &= \frac{1}{\|\mathbf{v}_2\|}\mathbf{v}_2 = \frac{1}{1}(0,-1,0) = (0,-1,0) \\ \mathbf{u}_3 &= \frac{1}{\|\mathbf{v}_3\|}\mathbf{v}_3 = \frac{1}{2\sqrt{5}}(2,0,4) = \left(\frac{1}{\sqrt{5}}, 0, \frac{2}{\sqrt{5}}\right)\end{aligned}\]
All that remains is to show that this new set of vectors is still orthogonal. We'll leave it to you to verify that,
\[\langle\mathbf{u}_1,\mathbf{u}_2\rangle = \langle\mathbf{u}_1,\mathbf{u}_3\rangle = \langle\mathbf{u}_2,\mathbf{u}_3\rangle = 0\]
and so we have turned the three vectors into a set of vectors that form an orthonormal set.
[Return to Problems]

We have the following very nice fact about orthogonal sets.

Theorem 1 Suppose $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is an orthogonal set of non-zero vectors in an inner product space. Then S is also a set of linearly independent vectors.

Proof : Note that we need the vectors to be non-zero vectors because the zero vector could be in a set of orthogonal vectors, and yet we know that if a set includes the zero vector it will be linearly dependent. So, now that we know there is a chance that these vectors are linearly independent (since we've excluded the zero vector) let's form the equation,
\[c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}\]
and we'll need to show that the only scalars that work here are $c_1 = 0$, $c_2 = 0$, ..., $c_n = 0$. In fact, we can do this in a single step. All we need to do is take the inner product of both sides with respect to $\mathbf{v}_i$, $i = 1, 2, \ldots, n$, and then use the properties of inner products to rearrange things a little.
\[\begin{aligned}\langle c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n, \mathbf{v}_i\rangle &= \langle\mathbf{0},\mathbf{v}_i\rangle \\ c_1\langle\mathbf{v}_1,\mathbf{v}_i\rangle + c_2\langle\mathbf{v}_2,\mathbf{v}_i\rangle + \cdots + c_n\langle\mathbf{v}_n,\mathbf{v}_i\rangle &= 0\end{aligned}\]
Now, because we know the vectors in S are orthogonal we know that $\langle\mathbf{v}_j,\mathbf{v}_i\rangle = 0$ if $i \ne j$, and so this reduces down to,
\[c_i\langle\mathbf{v}_i,\mathbf{v}_i\rangle = 0\]
Next, since we know that the vectors are all non-zero we have $\langle\mathbf{v}_i,\mathbf{v}_i\rangle > 0$, and so the only way that this can be zero is if $c_i = 0$. So, we've shown that we must have $c_1 = 0$, $c_2 = 0$, ..., $c_n = 0$ and so these vectors are linearly independent.

Okay, we are now ready to move into the main topic of this section. Since a set of orthogonal vectors is also linearly independent, if the vectors just happen to span the vector space we are working on they will also form a basis for the vector space.

Definition 2 Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for an inner product space.
(a) If S is also an orthogonal set then we call S an orthogonal basis.
(b) If S is also an orthonormal set then we call S an orthonormal basis.

Note that we've been using an orthonormal basis already to this point. The standard basis vectors for $\mathbb{R}^n$ are an orthonormal basis.

The following fact gives us one of the very nice properties of an orthogonal/orthonormal basis.

Theorem 2 Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is an orthogonal basis for an inner product space and that u is any vector from the inner product space. Then,
\[\mathbf{u} = \frac{\langle\mathbf{u},\mathbf{v}_1\rangle}{\|\mathbf{v}_1\|^2}\mathbf{v}_1 + \frac{\langle\mathbf{u},\mathbf{v}_2\rangle}{\|\mathbf{v}_2\|^2}\mathbf{v}_2 + \cdots + \frac{\langle\mathbf{u},\mathbf{v}_n\rangle}{\|\mathbf{v}_n\|^2}\mathbf{v}_n\]
If in addition S is in fact an orthonormal basis then,
\[\mathbf{u} = \langle\mathbf{u},\mathbf{v}_1\rangle\mathbf{v}_1 + \langle\mathbf{u},\mathbf{v}_2\rangle\mathbf{v}_2 + \cdots + \langle\mathbf{u},\mathbf{v}_n\rangle\mathbf{v}_n\]

Proof : We'll just show that the first formula holds. Once we have that, the second will follow directly from the fact that all the vectors in an orthonormal set have a norm of 1.

So, given u we need to find scalars $c_1$, $c_2$, ..., $c_n$ so that,
\[\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n\]
To find these scalars simply take the inner product of both sides with respect to $\mathbf{v}_i$, $i = 1, 2, \ldots, n$.
\[\langle\mathbf{u},\mathbf{v}_i\rangle = \langle c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n, \mathbf{v}_i\rangle = c_1\langle\mathbf{v}_1,\mathbf{v}_i\rangle + c_2\langle\mathbf{v}_2,\mathbf{v}_i\rangle + \cdots + c_n\langle\mathbf{v}_n,\mathbf{v}_i\rangle\]
Now, since we have an orthogonal basis we know that $\langle\mathbf{v}_j,\mathbf{v}_i\rangle = 0$ if $i \ne j$, and so this reduces to,
\[\langle\mathbf{u},\mathbf{v}_i\rangle = c_i\langle\mathbf{v}_i,\mathbf{v}_i\rangle\]
Also, because $\mathbf{v}_i$ is a basis vector we know that it isn't the zero vector and so we also know that $\langle\mathbf{v}_i,\mathbf{v}_i\rangle > 0$. This then gives us,
\[c_i = \frac{\langle\mathbf{u},\mathbf{v}_i\rangle}{\langle\mathbf{v}_i,\mathbf{v}_i\rangle}\]
However, from the definition of the norm we see that we can also write this as,
\[c_i = \frac{\langle\mathbf{u},\mathbf{v}_i\rangle}{\|\mathbf{v}_i\|^2}\]
and so we're done.

What this theorem is telling us is that for any vector in an inner product space, with an orthogonal/orthonormal basis, it is very easy to write down the linear combination of basis vectors for that vector. In other words, we don't need to go through all the work to find the linear combinations that we were doing in earlier sections.

We would like to be able to construct an orthogonal/orthonormal basis for a finite dimensional vector space given any basis of that vector space. The following two theorems will help us to do that.

Theorem 3 Suppose that W is a finite dimensional subspace of an inner product space V and further suppose that u is any vector in V. Then u can be written as,
\[\mathbf{u} = \operatorname{proj}_W\mathbf{u} + \operatorname{proj}_{W^\perp}\mathbf{u}\]
where $\operatorname{proj}_W\mathbf{u}$ is a vector that is in W and is called the orthogonal projection of u on W, and $\operatorname{proj}_{W^\perp}\mathbf{u}$ is a vector in $W^\perp$ (the orthogonal complement of W) and is called the component of u orthogonal to W.

Note that this theorem is really an extension of the idea of projections that we saw when we first introduced the concept of the dot product. Also note that $\operatorname{proj}_{W^\perp}\mathbf{u}$ can be easily computed from $\operatorname{proj}_W\mathbf{u}$ by,
\[\operatorname{proj}_{W^\perp}\mathbf{u} = \mathbf{u} - \operatorname{proj}_W\mathbf{u}\]
This theorem is not really the one that we need to construct an orthonormal basis. We will use portions of this theorem, but we needed it more to acknowledge that we could do projections and to get the notation out of the way. The following theorem is the one that will be the main workhorse of the process.
We would like to be able to construct an orthogonal/orthonormal basis for a finite dimensional vector space given any basis of that vector space. The following two theorems will help us to do that.

Theorem 3  Suppose that $W$ is a finite dimensional subspace of an inner product space $V$ and further suppose that $\mathbf{u}$ is any vector in $V$. Then $\mathbf{u}$ can be written as,
$$\mathbf{u} = \operatorname{proj}_W \mathbf{u} + \operatorname{proj}_{W^\perp} \mathbf{u}$$
where $\operatorname{proj}_W \mathbf{u}$ is a vector that is in $W$ and is called the orthogonal projection of $\mathbf{u}$ on $W$ and $\operatorname{proj}_{W^\perp} \mathbf{u}$ is a vector in $W^\perp$ (the orthogonal complement of $W$) and is called the component of $\mathbf{u}$ orthogonal to $W$.

Note that this theorem is really an extension of the idea of projections that we saw when we first introduced the concept of the dot product. Also note that $\operatorname{proj}_{W^\perp}\mathbf{u}$ can be easily computed from $\operatorname{proj}_W\mathbf{u}$ by,
$$\operatorname{proj}_{W^\perp}\mathbf{u} = \mathbf{u} - \operatorname{proj}_W\mathbf{u}$$
This theorem is not really the one that we need to construct an orthonormal basis. We will use portions of this theorem, but we needed it mostly to acknowledge that we could do projections and to get the notation out of the way. The following theorem is the one that will be the main workhorse of the process.

Theorem 4  Suppose that $W$ is a finite dimensional subspace of an inner product space $V$. Further suppose that $W$ has an orthogonal basis $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ and that $\mathbf{u}$ is any vector in $V$, then
$$\operatorname{proj}_W\mathbf{u} = \frac{\langle\mathbf{u},\mathbf{v}_1\rangle}{\|\mathbf{v}_1\|^2}\mathbf{v}_1 + \frac{\langle\mathbf{u},\mathbf{v}_2\rangle}{\|\mathbf{v}_2\|^2}\mathbf{v}_2 + \cdots + \frac{\langle\mathbf{u},\mathbf{v}_n\rangle}{\|\mathbf{v}_n\|^2}\mathbf{v}_n$$
If in addition $S$ is in fact an orthonormal basis then,
$$\operatorname{proj}_W\mathbf{u} = \langle\mathbf{u},\mathbf{v}_1\rangle\mathbf{v}_1 + \langle\mathbf{u},\mathbf{v}_2\rangle\mathbf{v}_2 + \cdots + \langle\mathbf{u},\mathbf{v}_n\rangle\mathbf{v}_n$$

So, just how does this theorem help us to construct an orthogonal/orthonormal basis? The following process, called the Gram-Schmidt process, will construct an orthogonal/orthonormal basis for a finite dimensional inner product space given any basis. We'll also be able to develop some very nice facts about the basis that we're going to be constructing as we go through the construction process.

Gram-Schmidt Process
Suppose that $V$ is a finite dimensional inner product space and that $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for $V$. The following process will construct an orthogonal basis for $V$, $\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n\}$. To find an orthonormal basis simply divide the $\mathbf{u}_i$'s by their norms.

Step 1: Let $\mathbf{u}_1 = \mathbf{v}_1$.

Step 2: Let $W_1 = \operatorname{span}\{\mathbf{u}_1\}$ and then define $\mathbf{u}_2 = \operatorname{proj}_{W_1^\perp}\mathbf{v}_2$ (i.e. $\mathbf{u}_2$ is the portion of $\mathbf{v}_2$ that is orthogonal to $\mathbf{u}_1$). Technically, this is all there is to Step 2 (once we show that $\mathbf{u}_2 \neq \mathbf{0}$ anyway) since $\mathbf{u}_2$ will be orthogonal to $\mathbf{u}_1$ because it is in $W_1^\perp$. However, this isn't terribly useful from a computational standpoint. Using the result of Theorem 3 and the formula from Theorem 4 gives us the following formula for $\mathbf{u}_2$,
$$\mathbf{u}_2 = \mathbf{v}_2 - \operatorname{proj}_{W_1}\mathbf{v}_2 = \mathbf{v}_2 - \frac{\langle\mathbf{v}_2,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1$$
Next, we need to verify that $\mathbf{u}_2 \neq \mathbf{0}$ because the zero vector cannot be a basis vector. To see that $\mathbf{u}_2 \neq \mathbf{0}$ assume for a second that we do have $\mathbf{u}_2 = \mathbf{0}$. This would give us,
$$\mathbf{v}_2 = \frac{\langle\mathbf{v}_2,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 = \frac{\langle\mathbf{v}_2,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{v}_1 \qquad \text{since } \mathbf{u}_1 = \mathbf{v}_1$$
But this tells us that $\mathbf{v}_2$ is a multiple of $\mathbf{v}_1$, which we can't have since they are both basis vectors and are hence linearly independent. So, $\mathbf{u}_2 \neq \mathbf{0}$.

Finally, let's observe an interesting consequence of how we found $\mathbf{u}_2$. Both $\mathbf{u}_1$ and $\mathbf{u}_2$ are orthogonal and so are linearly independent by Theorem 1 above and this means that they are a basis for the subspace $W_2 = \operatorname{span}\{\mathbf{u}_1,\mathbf{u}_2\}$ and this subspace has dimension of 2. However, they are also linear combinations of $\mathbf{v}_1$ and $\mathbf{v}_2$ and so $W_2$ is a subspace of $\operatorname{span}\{\mathbf{v}_1,\mathbf{v}_2\}$, which also has dimension 2. Therefore, by Theorem 9 from the section on Basis we can see that we must in fact have,
$$\operatorname{span}\{\mathbf{u}_1,\mathbf{u}_2\} = \operatorname{span}\{\mathbf{v}_1,\mathbf{v}_2\}$$
So, the two new vectors, $\mathbf{u}_1$ and $\mathbf{u}_2$, will in fact span the same subspace as the two original vectors, $\mathbf{v}_1$ and $\mathbf{v}_2$, span. This is a nice consequence of the Gram-Schmidt process.
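Before moving to the next step, here is a small sketch of the projection formula from Theorem 4 in code (not from the original notes; the vectors are the ones used in Example 2 below), computing exactly the Step 2 quantity:

```python
import numpy as np

def proj(u, basis):
    """Orthogonal projection of u onto span(basis), where `basis` is a list
    of mutually orthogonal (not necessarily unit) vectors, per Theorem 4."""
    return sum((np.dot(u, v) / np.dot(v, v)) * v for v in basis)

u1 = np.array([2.0, -1.0, 0.0])
v2 = np.array([1.0, 0.0, -1.0])

# Step 2 of Gram-Schmidt: the part of v2 orthogonal to u1.
u2 = v2 - proj(v2, [u1])
print(u2, np.dot(u1, u2))   # [ 0.2  0.4 -1. ] and inner product 0.0
```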
Step 3: This step is really an extension of Step 2 and so we won't go into quite the detail here as we did in Step 2. First, define $W_2 = \operatorname{span}\{\mathbf{u}_1,\mathbf{u}_2\}$ and then define $\mathbf{u}_3 = \operatorname{proj}_{W_2^\perp}\mathbf{v}_3$ and so $\mathbf{u}_3$ will be the portion of $\mathbf{v}_3$ that is orthogonal to both $\mathbf{u}_1$ and $\mathbf{u}_2$. We can compute $\mathbf{u}_3$ as follows,
$$\mathbf{u}_3 = \mathbf{v}_3 - \operatorname{proj}_{W_2}\mathbf{v}_3 = \mathbf{v}_3 - \frac{\langle\mathbf{v}_3,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 - \frac{\langle\mathbf{v}_3,\mathbf{u}_2\rangle}{\|\mathbf{u}_2\|^2}\mathbf{u}_2$$
Next, both $\mathbf{u}_1$ and $\mathbf{u}_2$ are linear combinations of $\mathbf{v}_1$ and $\mathbf{v}_2$ and so $\mathbf{u}_3$ can be thought of as a linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$. Then, because $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ are linearly independent, we know that we must have $\mathbf{u}_3 \neq \mathbf{0}$. You should probably go through the steps of verifying the claims made here for the practice.

With this step we can also note that because $\mathbf{u}_3$ is in the orthogonal complement of $W_2$ (by construction) and because we know that,
$$W_2 = \operatorname{span}\{\mathbf{u}_1,\mathbf{u}_2\} = \operatorname{span}\{\mathbf{v}_1,\mathbf{v}_2\}$$
from the previous step, we know as well that $\mathbf{u}_3$ must be orthogonal to all vectors in $W_2$. In particular $\mathbf{u}_3$ must be orthogonal to $\mathbf{v}_1$ and $\mathbf{v}_2$. Finally, following an argument similar to that in Step 2 we get that,
$$\operatorname{span}\{\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3\} = \operatorname{span}\{\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3\}$$

Step 4: Continue in this fashion until we've found $\mathbf{u}_n$.

There is the Gram-Schmidt process. Going through the process above, with all the explanation as we provided, can be a little daunting and can make the process look more complicated than it in fact is. Let's summarize the process before we go onto a couple of examples.

Gram-Schmidt Process
Suppose that $V$ is a finite dimensional inner product space and that $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ is a basis for $V$, then an orthogonal basis for $V$, $\{\mathbf{u}_1, \ldots, \mathbf{u}_n\}$, can be found using the following process.
$$\mathbf{u}_1 = \mathbf{v}_1$$
$$\mathbf{u}_2 = \mathbf{v}_2 - \frac{\langle\mathbf{v}_2,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1$$
$$\mathbf{u}_3 = \mathbf{v}_3 - \frac{\langle\mathbf{v}_3,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 - \frac{\langle\mathbf{v}_3,\mathbf{u}_2\rangle}{\|\mathbf{u}_2\|^2}\mathbf{u}_2$$
$$\vdots$$
$$\mathbf{u}_n = \mathbf{v}_n - \frac{\langle\mathbf{v}_n,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 - \frac{\langle\mathbf{v}_n,\mathbf{u}_2\rangle}{\|\mathbf{u}_2\|^2}\mathbf{u}_2 - \cdots - \frac{\langle\mathbf{v}_n,\mathbf{u}_{n-1}\rangle}{\|\mathbf{u}_{n-1}\|^2}\mathbf{u}_{n-1}$$
To convert the basis to an orthonormal basis simply divide all the new basis vectors by their norm. Also, due to the construction process we have
$$\operatorname{span}\{\mathbf{u}_1,\ldots,\mathbf{u}_k\} = \operatorname{span}\{\mathbf{v}_1,\ldots,\mathbf{v}_k\} \qquad k = 1, 2, \ldots, n$$
and $\mathbf{u}_k$ will be orthogonal to $\operatorname{span}\{\mathbf{v}_1,\ldots,\mathbf{v}_{k-1}\}$ for $k = 2, 3, \ldots, n$.

Okay, let's go through a couple of examples here.

Example 2  Given that $\mathbf{v}_1 = (2,-1,0)$, $\mathbf{v}_2 = (1,0,-1)$, and $\mathbf{v}_3 = (3,7,-1)$ is a basis of $\mathbb{R}^3$ and assuming that we're working with the standard Euclidean inner product, construct an orthogonal basis for $\mathbb{R}^3$.

Solution  You should verify that the set of vectors above is in fact a basis for $\mathbb{R}^3$. Now, we'll need to go through the Gram-Schmidt process a couple of times. The first step is easy.
$$\mathbf{u}_1 = \mathbf{v}_1 = (2,-1,0)$$
The remaining two steps are going to involve a little more work, but won't be all that bad. Here is the formula for the second vector in our orthogonal basis,
$$\mathbf{u}_2 = \mathbf{v}_2 - \frac{\langle\mathbf{v}_2,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1$$
and here are all the quantities that we'll need.
$$\langle\mathbf{v}_2,\mathbf{u}_1\rangle = 2 \qquad \|\mathbf{u}_1\|^2 = 5$$
The second vector is then,
$$\mathbf{u}_2 = (1,0,-1) - \frac{2}{5}(2,-1,0) = \left(\frac{1}{5},\frac{2}{5},-1\right)$$
The formula for the third (and final) vector is,
$$\mathbf{u}_3 = \mathbf{v}_3 - \frac{\langle\mathbf{v}_3,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 - \frac{\langle\mathbf{v}_3,\mathbf{u}_2\rangle}{\|\mathbf{u}_2\|^2}\mathbf{u}_2$$
and here are the quantities that we need for this step.
$$\langle\mathbf{v}_3,\mathbf{u}_1\rangle = -1 \qquad \langle\mathbf{v}_3,\mathbf{u}_2\rangle = \frac{22}{5} \qquad \|\mathbf{u}_1\|^2 = 5 \qquad \|\mathbf{u}_2\|^2 = \frac{6}{5}$$
The third vector is then,
$$\mathbf{u}_3 = (3,7,-1) - \frac{-1}{5}(2,-1,0) - \frac{22/5}{6/5}\left(\frac{1}{5},\frac{2}{5},-1\right) = \left(\frac{8}{3},\frac{16}{3},\frac{8}{3}\right)$$
So, the orthogonal basis that we've constructed is,
$$\mathbf{u}_1 = (2,-1,0) \qquad \mathbf{u}_2 = \left(\frac{1}{5},\frac{2}{5},-1\right) \qquad \mathbf{u}_3 = \left(\frac{8}{3},\frac{16}{3},\frac{8}{3}\right)$$
You should verify that these do in fact form an orthogonal set.
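The summary formulas above translate almost line for line into code. Here is a short sketch (ours, not the notes') that reproduces the orthogonal basis of Example 2:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space as `vectors`
    (assumed linearly independent), following the summary formulas above."""
    basis = []
    for v in vectors:
        # Subtract off the projection onto each previously built vector.
        u = v - sum((np.dot(v, b) / np.dot(b, b)) * b for b in basis)
        basis.append(u)
    return basis

vs = [np.array([2.0, -1.0, 0.0]),
      np.array([1.0, 0.0, -1.0]),
      np.array([3.0, 7.0, -1.0])]

for u in gram_schmidt(vs):
    print(u)
# [ 2. -1.  0.]
# [ 0.2  0.4 -1. ]                    i.e. (1/5, 2/5, -1)
# [ 2.6667  5.3333  2.6667 ]          i.e. (8/3, 16/3, 8/3)
```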
Example 3  Given that $\mathbf{v}_1 = (2,-1,0)$, $\mathbf{v}_2 = (1,0,-1)$, and $\mathbf{v}_3 = (3,7,-1)$ is a basis of $\mathbb{R}^3$ and assuming that we're working with the standard Euclidean inner product, construct an orthonormal basis for $\mathbb{R}^3$.

Solution  First, note that this is almost the same problem as the previous one except this time we're looking for an orthonormal basis instead of an orthogonal basis.

There are two ways to approach this. The first is often the easiest way and that is to acknowledge that we've got an orthogonal basis and we can turn that into an orthonormal basis simply by dividing by the norms of each of the vectors. Let's do it this way and see what we get. Here are the norms of the vectors from the previous example.
$$\|\mathbf{u}_1\| = \sqrt{5} \qquad \|\mathbf{u}_2\| = \frac{\sqrt{30}}{5} \qquad \|\mathbf{u}_3\| = \frac{8\sqrt{6}}{3}$$
Note that in order to eliminate as many square roots as possible we rationalized the denominators of the fractions here. Dividing by the norms gives the following set of vectors.
$$\mathbf{w}_1 = \left(\frac{2}{\sqrt{5}},-\frac{1}{\sqrt{5}},0\right) \qquad \mathbf{w}_2 = \left(\frac{1}{\sqrt{30}},\frac{2}{\sqrt{30}},-\frac{5}{\sqrt{30}}\right) \qquad \mathbf{w}_3 = \left(\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}},\frac{1}{\sqrt{6}}\right)$$
Okay, that's the first way to do it. The second way is to go through the Gram-Schmidt process and this time divide by the norm as we find each new vector. This will have two effects. First, it will put a fair amount of roots into the vectors that we'll need to work with. Second, because we are turning the new vectors into vectors with length one, the norm in the Gram-Schmidt formula will also be 1 and so isn't needed.

Let's go through this once just to show you the differences. The first new vector will be,
$$\mathbf{u}_1 = \frac{1}{\|\mathbf{v}_1\|}\mathbf{v}_1 = \frac{1}{\sqrt{5}}(2,-1,0) = \left(\frac{2}{\sqrt{5}},-\frac{1}{\sqrt{5}},0\right)$$
Now, to get the second vector we first need to compute,
$$\mathbf{w} = \mathbf{v}_2 - \langle\mathbf{v}_2,\mathbf{u}_1\rangle\mathbf{u}_1$$
however we won't call it $\mathbf{u}_2$ yet since we'll need to divide by its norm once we're done. Also note that we've acknowledged that the norm of $\mathbf{u}_1$ is 1 and so we don't need it in the formula. Here is the dot product that we need for this step.
$$\langle\mathbf{v}_2,\mathbf{u}_1\rangle = \frac{2}{\sqrt{5}}$$
Here is the new orthogonal vector.
$$\mathbf{w} = (1,0,-1) - \frac{2}{\sqrt{5}}\left(\frac{2}{\sqrt{5}},-\frac{1}{\sqrt{5}},0\right) = \left(\frac{1}{5},\frac{2}{5},-1\right)$$
Notice that this is the same as the second vector we found in Example 2. In this case we'll need to divide by its norm to get the vector that we want.
$$\mathbf{u}_2 = \frac{1}{\|\mathbf{w}\|}\mathbf{w} = \frac{5}{\sqrt{30}}\left(\frac{1}{5},\frac{2}{5},-1\right) = \left(\frac{1}{\sqrt{30}},\frac{2}{\sqrt{30}},-\frac{5}{\sqrt{30}}\right)$$
Finally, for the third orthogonal vector the formula will be,
$$\mathbf{w} = \mathbf{v}_3 - \langle\mathbf{v}_3,\mathbf{u}_1\rangle\mathbf{u}_1 - \langle\mathbf{v}_3,\mathbf{u}_2\rangle\mathbf{u}_2$$
and again we've acknowledged that the norms of the first two vectors will be 1 and so aren't needed in this formula. Here are the dot products that we'll need.
$$\langle\mathbf{v}_3,\mathbf{u}_1\rangle = -\frac{1}{\sqrt{5}} \qquad \langle\mathbf{v}_3,\mathbf{u}_2\rangle = \frac{22}{\sqrt{30}}$$
The orthogonal vector is then,
$$\mathbf{w} = (3,7,-1) - \left(-\frac{1}{\sqrt{5}}\right)\left(\frac{2}{\sqrt{5}},-\frac{1}{\sqrt{5}},0\right) - \frac{22}{\sqrt{30}}\left(\frac{1}{\sqrt{30}},\frac{2}{\sqrt{30}},-\frac{5}{\sqrt{30}}\right) = \left(\frac{8}{3},\frac{16}{3},\frac{8}{3}\right)$$
Again, this is the third orthogonal vector that we found in Example 2. Here is the final step to get our third orthonormal vector for this problem.
$$\mathbf{u}_3 = \frac{1}{\|\mathbf{w}\|}\mathbf{w} = \frac{3}{8\sqrt{6}}\left(\frac{8}{3},\frac{16}{3},\frac{8}{3}\right) = \left(\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}},\frac{1}{\sqrt{6}}\right)$$
So, we got exactly the same vectors as we did when we just used the results of Example 2. Of course that is something that we should expect to happen here.
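As a quick check (again a sketch of ours, not part of the notes): if we stack an orthonormal set as the columns of a matrix $W$, then orthonormality is equivalent to $W^TW$ being the identity matrix, a fact that is proved in the QR-Decomposition section below.

```python
import numpy as np

# Columns are w1, w2, w3 from the first method above.
W = np.array([[ 2/np.sqrt(5), 1/np.sqrt(30), 1/np.sqrt(6)],
              [-1/np.sqrt(5), 2/np.sqrt(30), 2/np.sqrt(6)],
              [ 0.0,         -5/np.sqrt(30), 1/np.sqrt(6)]])

# For an orthonormal set of columns, W^T W is the identity matrix.
print(np.allclose(W.T @ W, np.eye(3)))   # True
```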
So, as we saw in the previous example, there are two ways to get an orthonormal basis from any given basis. Each has its pros and cons and you'll need to decide which method to use. If we first compute the orthogonal basis and then divide all of the vectors by their norms at the end, we don't have to work much with square roots; however, we do need to compute norms that we won't otherwise need. Again, it will be up to you to determine what the best method for you to use is.

Example 4  Given that $\mathbf{v}_1 = (1,1,1,1)$, $\mathbf{v}_2 = (1,1,1,0)$, $\mathbf{v}_3 = (1,1,0,0)$ and $\mathbf{v}_4 = (1,0,0,0)$ is a basis of $\mathbb{R}^4$ and assuming that we're working with the standard Euclidean inner product, construct an orthonormal basis for $\mathbb{R}^4$.

Solution  Now, we're looking for an orthonormal basis and so we've got our two options on how to proceed here. In this case we'll construct an orthogonal basis and then convert that into an orthonormal basis at the very end.

The first vector is,
$$\mathbf{u}_1 = \mathbf{v}_1 = (1,1,1,1)$$
Here's the dot product and norm we need for the second vector.
$$\langle\mathbf{v}_2,\mathbf{u}_1\rangle = 3 \qquad \|\mathbf{u}_1\|^2 = 4$$
The second orthogonal vector is then,
$$\mathbf{u}_2 = (1,1,1,0) - \frac{3}{4}(1,1,1,1) = \left(\frac14,\frac14,\frac14,-\frac34\right)$$
For the third vector we'll need the following dot products and norms,
$$\langle\mathbf{v}_3,\mathbf{u}_1\rangle = 2 \qquad \langle\mathbf{v}_3,\mathbf{u}_2\rangle = \frac12 \qquad \|\mathbf{u}_1\|^2 = 4 \qquad \|\mathbf{u}_2\|^2 = \frac34$$
and the third orthogonal vector is,
$$\mathbf{u}_3 = (1,1,0,0) - \frac{2}{4}(1,1,1,1) - \frac{1/2}{3/4}\left(\frac14,\frac14,\frac14,-\frac34\right) = \left(\frac13,\frac13,-\frac23,0\right)$$
Finally, for the fourth orthogonal vector we'll need,
$$\langle\mathbf{v}_4,\mathbf{u}_1\rangle = 1 \qquad \langle\mathbf{v}_4,\mathbf{u}_2\rangle = \frac14 \qquad \langle\mathbf{v}_4,\mathbf{u}_3\rangle = \frac13$$
$$\|\mathbf{u}_1\|^2 = 4 \qquad \|\mathbf{u}_2\|^2 = \frac34 \qquad \|\mathbf{u}_3\|^2 = \frac23$$
and the fourth vector in our new orthogonal basis is,
$$\mathbf{u}_4 = (1,0,0,0) - \frac14(1,1,1,1) - \frac{1/4}{3/4}\left(\frac14,\frac14,\frac14,-\frac34\right) - \frac{1/3}{2/3}\left(\frac13,\frac13,-\frac23,0\right) = \left(\frac12,-\frac12,0,0\right)$$
Okay, the orthogonal basis is then,
$$\mathbf{u}_1 = (1,1,1,1) \quad \mathbf{u}_2 = \left(\frac14,\frac14,\frac14,-\frac34\right) \quad \mathbf{u}_3 = \left(\frac13,\frac13,-\frac23,0\right) \quad \mathbf{u}_4 = \left(\frac12,-\frac12,0,0\right)$$
Next, we'll need their norms so we can turn this set into an orthonormal basis.
$$\|\mathbf{u}_1\| = 2 \qquad \|\mathbf{u}_2\| = \frac{\sqrt{3}}{2} \qquad \|\mathbf{u}_3\| = \frac{\sqrt{6}}{3} \qquad \|\mathbf{u}_4\| = \frac{\sqrt{2}}{2}$$
The orthonormal basis is then,
$$\mathbf{w}_1 = \frac{1}{\|\mathbf{u}_1\|}\mathbf{u}_1 = \left(\frac12,\frac12,\frac12,\frac12\right)$$
$$\mathbf{w}_2 = \frac{1}{\|\mathbf{u}_2\|}\mathbf{u}_2 = \left(\frac{1}{2\sqrt{3}},\frac{1}{2\sqrt{3}},\frac{1}{2\sqrt{3}},-\frac{3}{2\sqrt{3}}\right)$$
$$\mathbf{w}_3 = \frac{1}{\|\mathbf{u}_3\|}\mathbf{u}_3 = \left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},-\frac{2}{\sqrt{6}},0\right)$$
$$\mathbf{w}_4 = \frac{1}{\|\mathbf{u}_4\|}\mathbf{u}_4 = \left(\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}},0,0\right)$$

Now, we saw how to expand a linearly independent set of vectors into a basis for a vector space. We can do the same thing here with orthogonal sets of vectors and the Gram-Schmidt process.

Example 5  Expand the vectors $\mathbf{v}_1 = (2,0,-1)$ and $\mathbf{v}_2 = (2,0,4)$ into an orthogonal basis for $\mathbb{R}^3$ and assume that we're working with the standard Euclidean inner product.

Solution  First notice that the two vectors are already orthogonal and linearly independent. Since they are linearly independent and we know that a basis for $\mathbb{R}^3$ will contain 3 vectors, we know that we'll only need to add in one more vector. Next, since they are already orthogonal that will simplify some of the work.

Now, recall that in order to expand a linearly independent set into a basis for a vector space we need to add in a vector that is not in the span of the original vectors. Doing so will retain the linear independence of the set. Since both of these vectors have a zero in the second component we can add in any of the following to the set.
$$(0,1,0) \qquad (1,1,1) \qquad (1,1,0) \qquad (0,1,1)$$
If we used the first one we'd actually have an orthogonal set without any work, but that would be boring and defeat the purpose of the example. To make our life at least somewhat easier with the work, let's add in the fourth one to get the set of vectors.
$$\mathbf{v}_1 = (2,0,-1) \qquad \mathbf{v}_2 = (2,0,4) \qquad \mathbf{v}_3 = (0,1,1)$$
Now, we know these are linearly independent and since there are three vectors, by Theorem 6 from the section on Basis we know that they form a basis for $\mathbb{R}^3$. However, they don't form an orthogonal basis. To get an orthogonal basis we would need to perform Gram-Schmidt on the set. However, since the first two vectors are already orthogonal, performing Gram-Schmidt would not have any effect on them (you should verify this). So, let's just rename the first two vectors as,
$$\mathbf{u}_1 = (2,0,-1) \qquad \mathbf{u}_2 = (2,0,4)$$
and then just perform Gram-Schmidt for the third vector. Here are the dot products and norms that we'll need.
$$\langle\mathbf{v}_3,\mathbf{u}_1\rangle = -1 \qquad \langle\mathbf{v}_3,\mathbf{u}_2\rangle = 4 \qquad \|\mathbf{u}_1\|^2 = 5 \qquad \|\mathbf{u}_2\|^2 = 20$$
The third vector will then be,
$$\mathbf{u}_3 = (0,1,1) - \frac{-1}{5}(2,0,-1) - \frac{4}{20}(2,0,4) = (0,1,0)$$

Least Squares

In this section we're going to take a look at an important application of orthogonal projections to inconsistent systems of equations. Recall that a system is called inconsistent if there are no solutions to the system. The natural question should probably arise at this point of just why we would care about this. Let's take a look at the following examples that we can use to motivate the reason for looking into this.

Example 1  Find the equation of the line that runs through the four points $(1,-1)$, $(4,11)$, $(-1,-9)$ and $(-2,-13)$.

Solution  So, what we're looking for are the values of $m$ and $b$ for which the line,
$$y = mx + b$$
will run through the four points given above. If we plug these points into the line we arrive at the following system of equations.
$$\begin{aligned} m + b &= -1\\ 4m + b &= 11\\ -m + b &= -9\\ -2m + b &= -13 \end{aligned}$$
The corresponding matrix form of this system is,
$$\begin{bmatrix}1 & 1\\ 4 & 1\\ -1 & 1\\ -2 & 1\end{bmatrix}\begin{bmatrix}m\\ b\end{bmatrix} = \begin{bmatrix}-1\\ 11\\ -9\\ -13\end{bmatrix}$$
Solving this system (either the matrix form or the equations) gives us the solution,
$$m = 4 \qquad b = -5$$
So, the line $y = 4x - 5$ will run through the four points given above. Note that this makes this a consistent system.

Example 2  Find the equation of the line that runs through the four points $(-3,70)$, $(1,21)$, $(-7,110)$ and $(5,-35)$.

Solution  So, this is essentially the same problem as in the previous example. Here are the system of equations and matrix form of the system of equations that we need to solve for this problem.
$$\begin{aligned} -3m + b &= 70\\ m + b &= 21\\ -7m + b &= 110\\ 5m + b &= -35 \end{aligned} \qquad\qquad \begin{bmatrix}-3 & 1\\ 1 & 1\\ -7 & 1\\ 5 & 1\end{bmatrix}\begin{bmatrix}m\\ b\end{bmatrix} = \begin{bmatrix}70\\ 21\\ 110\\ -35\end{bmatrix}$$
Now, try as we might we won't find a solution to this system and so this system is inconsistent.

The previous two examples were asking for pretty much the same thing and in the first example we were able to answer the question while in the second we were not able to answer the question. It is the second example that we want to look at a little closer. Here is a graph of the points given in this example. We can see that these points do almost fall on a line. Without the reference line that we put into the sketch it would not be clear that these points did not fall onto a line, and so asking the question that we did was not totally unreasonable.

Let's further suppose that the four points that we have in this example came from some experiment and we know for some physical reason that the data should all lie on a straight line.
However, inaccuracies in the measuring equipment caused some (or all) of the numbers to be a little off. In light of this, the question in Example 2 is again not unreasonable and in fact we may still need to answer it in some way.

That is the point of this section. Given this set of data can we find the equation of a line that will as closely as possible (whatever this means...) approximate each of the data points. Or, more generally, given an inconsistent system of equations, $A\mathbf{x} = \mathbf{b}$, can we find a vector, let's call it $\bar{\mathbf{x}}$, so that $A\bar{\mathbf{x}}$ will be as close to $\mathbf{b}$ as possible (again, whatever this means...).

To answer this question let's step back a bit and take a look at the general situation. So, we will suppose that we have an inconsistent system of $n$ equations in $m$ unknowns, $A\mathbf{x} = \mathbf{b}$, so the coefficient matrix, $A$, will have size $n \times m$. Let's rewrite the system a little and make the following definition.
$$\boldsymbol{\varepsilon} = \mathbf{b} - A\mathbf{x}$$
We will call $\boldsymbol{\varepsilon}$ the error vector and we'll call $\|\boldsymbol{\varepsilon}\| = \|\mathbf{b} - A\mathbf{x}\|$ the error since it will measure the distance between $A\mathbf{x}$ and $\mathbf{b}$ for any vector $\mathbf{x}$ in $\mathbb{R}^m$ (there are $m$ unknowns and so $\mathbf{x}$ will be in $\mathbb{R}^m$). Note that we're going to be using the standard Euclidean inner product to compute the norm in these cases. The least squares problem is then the following problem.

Least Squares Problem  Given an inconsistent system of equations, $A\mathbf{x} = \mathbf{b}$, we want to find a vector, $\bar{\mathbf{x}}$, from $\mathbb{R}^m$ so that the error $\|\boldsymbol{\varepsilon}\| = \|\mathbf{b} - A\bar{\mathbf{x}}\|$ is the smallest possible error. The vector $\bar{\mathbf{x}}$ is called the least squares solution.

Solving this problem is actually easier than it might look at first. The first thing that we'll want to do is look at a more general situation. The following theorem will be useful in solving the least squares problem.

Theorem 1  Suppose that $W$ is a finite dimensional subspace of an inner product space $V$ and that $\mathbf{u}$ is any vector in $V$. The best approximation to $\mathbf{u}$ from $W$ is then $\operatorname{proj}_W\mathbf{u}$. By best approximation we mean that for every $\mathbf{w}$ (that is not $\operatorname{proj}_W\mathbf{u}$) in $W$ we will have,
$$\|\mathbf{u} - \operatorname{proj}_W\mathbf{u}\| < \|\mathbf{u} - \mathbf{w}\|$$

Proof: For any vector $\mathbf{w}$ in $W$ we can write,
$$\mathbf{u} - \mathbf{w} = \left(\mathbf{u} - \operatorname{proj}_W\mathbf{u}\right) + \left(\operatorname{proj}_W\mathbf{u} - \mathbf{w}\right)$$
Notice that $\operatorname{proj}_W\mathbf{u} - \mathbf{w}$ is a difference of vectors in $W$ and hence must also be in $W$. Likewise, $\mathbf{u} - \operatorname{proj}_W\mathbf{u}$ is in fact $\operatorname{proj}_{W^\perp}\mathbf{u}$, the component of $\mathbf{u}$ orthogonal to $W$, and so is orthogonal to any vector in $W$. Therefore $\operatorname{proj}_W\mathbf{u} - \mathbf{w}$ and $\mathbf{u} - \operatorname{proj}_W\mathbf{u}$ are orthogonal vectors. So, by the Pythagorean Theorem we have,
$$\|\mathbf{u} - \mathbf{w}\|^2 = \|\mathbf{u} - \operatorname{proj}_W\mathbf{u}\|^2 + \|\operatorname{proj}_W\mathbf{u} - \mathbf{w}\|^2$$
Finally, if we have $\mathbf{w} \neq \operatorname{proj}_W\mathbf{u}$ then we know that $\|\operatorname{proj}_W\mathbf{u} - \mathbf{w}\|^2 > 0$ and so if we drop this term we get,
$$\|\mathbf{u} - \mathbf{w}\|^2 > \|\mathbf{u} - \operatorname{proj}_W\mathbf{u}\|^2$$
which is what we wanted to prove.

So, just what does this theorem do for us? Well, for any vector $\mathbf{x}$ in $\mathbb{R}^m$ we know that $A\mathbf{x}$ will be a linear combination of the column vectors from $A$. Now, let $W$ be the subspace of $\mathbb{R}^n$ (yes, $\mathbb{R}^n$, since each column of $A$ has $n$ entries) that is spanned by the column vectors of $A$. Then $A\mathbf{x}$ will not only be in $W$ (since it's a linear combination of the column vectors) but as we let $\mathbf{x}$ range over all possible vectors in $\mathbb{R}^m$, $A\mathbf{x}$ will range over all of $W$.

Now, the least squares problem is asking us to find the vector $\bar{\mathbf{x}}$ in $\mathbb{R}^m$ so that $\|\bar{\boldsymbol{\varepsilon}}\|$ is smaller than (i.e. has smaller norm than) all other possible values of $\|\boldsymbol{\varepsilon}\|$, i.e. $\|\bar{\boldsymbol{\varepsilon}}\| < \|\boldsymbol{\varepsilon}\|$. If we plug in the definition of the errors we arrive at,
$$\|\mathbf{b} - A\bar{\mathbf{x}}\| < \|\mathbf{b} - A\mathbf{x}\|$$
With the least squares problem we are looking for the closest that we can get $A\mathbf{x}$ to $\mathbf{b}$.
However, this is exactly the type of situation that Theorem 1 is telling us how to solve. The vectors $A\mathbf{x}$ range over all possible vectors in $W$ and we want the one that is closest to some vector $\mathbf{b}$ in $\mathbb{R}^n$. Theorem 1 tells us that the one that we're after is,
$$A\bar{\mathbf{x}} = \operatorname{proj}_W\mathbf{b}$$
Of course we are actually after $\bar{\mathbf{x}}$ and not $A\bar{\mathbf{x}}$, but this does give us one way to find $\bar{\mathbf{x}}$. We could first compute $\operatorname{proj}_W\mathbf{b}$ and then solve $A\bar{\mathbf{x}} = \operatorname{proj}_W\mathbf{b}$ for $\bar{\mathbf{x}}$ and we'd have the solution that we're after. There is however a better way of doing this. Before we give that theorem, though, we'll need a quick fact.

Theorem 2  Suppose that $A$ is an $n \times m$ matrix with linearly independent columns. Then $A^TA$ is an invertible matrix.

Proof: From Theorem 8 in the Fundamental Subspaces section we know that if $A^TA\mathbf{x} = \mathbf{0}$ has only the trivial solution then $A^TA$ will be an invertible matrix. So, let's suppose that $A^TA\mathbf{x} = \mathbf{0}$. This tells us that $A\mathbf{x}$ is in the null space of $A^T$, but we also know that $A\mathbf{x}$ is in the column space of $A$. Theorem 7 from the section on Inner Product Spaces tells us that these two spaces are orthogonal complements and Theorem 6 from the same section tells us that the only vector in common to both must be the zero vector, and so we know that $A\mathbf{x} = \mathbf{0}$.

If $\mathbf{c}_1$, $\mathbf{c}_2$, ..., $\mathbf{c}_m$ are the columns of $A$ then we know that $A\mathbf{x}$ can be written as,
$$A\mathbf{x} = x_1\mathbf{c}_1 + x_2\mathbf{c}_2 + \cdots + x_m\mathbf{c}_m$$
Then using $A\mathbf{x} = \mathbf{0}$ we also know that,
$$A\mathbf{x} = x_1\mathbf{c}_1 + x_2\mathbf{c}_2 + \cdots + x_m\mathbf{c}_m = \mathbf{0}$$
However, since the columns of $A$ are linearly independent this equation can only have the trivial solution, $\mathbf{x} = \mathbf{0}$. Therefore $A^TA\mathbf{x} = \mathbf{0}$ has only the trivial solution and so $A^TA$ is an invertible matrix.

The following theorem will now give us a better method for finding the least squares solution to a system of equations.

Theorem 3  Given the system of equations $A\mathbf{x} = \mathbf{b}$, a least squares solution to the system, denoted by $\bar{\mathbf{x}}$, will also be a solution to the associated normal system,
$$A^TA\bar{\mathbf{x}} = A^T\mathbf{b}$$
Further, if $A$ has linearly independent columns then there is a unique least squares solution given by,
$$\bar{\mathbf{x}} = \left(A^TA\right)^{-1}A^T\mathbf{b}$$

Proof: Let's suppose that $\bar{\mathbf{x}}$ is a least squares solution and so,
$$A\bar{\mathbf{x}} = \operatorname{proj}_W\mathbf{b}$$
Now, let's consider,
$$\mathbf{b} - A\bar{\mathbf{x}} = \mathbf{b} - \operatorname{proj}_W\mathbf{b}$$
However, as pointed out in the proof of Theorem 1, we know that $\mathbf{b} - \operatorname{proj}_W\mathbf{b}$ is in the orthogonal complement of $W$. Next, $W$ is the column space of $A$ and by Theorem 7 from the section on Inner Product Spaces we know that the orthogonal complement of the column space of $A$ is in fact the null space of $A^T$ and so $\mathbf{b} - \operatorname{proj}_W\mathbf{b}$ must be in the null space of $A^T$. So, we must then have,
$$A^T\left(\mathbf{b} - \operatorname{proj}_W\mathbf{b}\right) = A^T\left(\mathbf{b} - A\bar{\mathbf{x}}\right) = \mathbf{0}$$
Or, with a little rewriting we arrive at,
$$A^TA\bar{\mathbf{x}} = A^T\mathbf{b}$$
and so we see that $\bar{\mathbf{x}}$ must also be a solution to the normal system of equations.

For the second part we don't have much to do. If the columns of $A$ are linearly independent then $A^TA$ is invertible by Theorem 2 above. However, by Theorem 8 in the Fundamental Subspaces section this means that $A^TA\bar{\mathbf{x}} = A^T\mathbf{b}$ has a unique solution. To find the unique solution we just need to multiply both sides by the inverse of $A^TA$.

So, to find a least squares solution to $A\mathbf{x} = \mathbf{b}$ all we need to do is solve the normal system of equations,
$$A^TA\bar{\mathbf{x}} = A^T\mathbf{b}$$
and we will have a least squares solution.
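The normal system is trivial to set up in code. Here is a quick sketch (not from the notes) that applies it to the inconsistent system of Example 2; compare the output with the hand computation in Example 3 below:

```python
import numpy as np

def least_squares(A, b):
    """Solve the normal system A^T A x = A^T b for the least squares solution.
    Assumes A has linearly independent columns so A^T A is invertible."""
    return np.linalg.solve(A.T @ A, A.T @ b)

# The inconsistent system from Example 2.
A = np.array([[-3.0, 1.0], [1.0, 1.0], [-7.0, 1.0], [5.0, 1.0]])
b = np.array([70.0, 21.0, 110.0, -35.0])

print(least_squares(A, b))   # [-12.1  29.4], i.e. m = -12.1 and b = 29.4
```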
Now we should work a couple of examples. We'll start with Example 2 from above.

Example 3  Use least squares to find the equation of the line that will best approximate the points $(-3,70)$, $(1,21)$, $(-7,110)$ and $(5,-35)$.

Solution  The system of equations that we need to solve from Example 2 is,
$$\begin{bmatrix}-3 & 1\\ 1 & 1\\ -7 & 1\\ 5 & 1\end{bmatrix}\begin{bmatrix}m\\ b\end{bmatrix} = \begin{bmatrix}70\\ 21\\ 110\\ -35\end{bmatrix}$$
So, we have,
$$A^T = \begin{bmatrix}-3 & 1 & -7 & 5\\ 1 & 1 & 1 & 1\end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix}70\\ 21\\ 110\\ -35\end{bmatrix}$$
The normal system that we need to solve is then,
$$\begin{bmatrix}-3 & 1 & -7 & 5\\ 1 & 1 & 1 & 1\end{bmatrix}\begin{bmatrix}-3 & 1\\ 1 & 1\\ -7 & 1\\ 5 & 1\end{bmatrix}\begin{bmatrix}m\\ b\end{bmatrix} = \begin{bmatrix}-3 & 1 & -7 & 5\\ 1 & 1 & 1 & 1\end{bmatrix}\begin{bmatrix}70\\ 21\\ 110\\ -35\end{bmatrix}$$
$$\begin{bmatrix}84 & -4\\ -4 & 4\end{bmatrix}\begin{bmatrix}m\\ b\end{bmatrix} = \begin{bmatrix}-1134\\ 166\end{bmatrix}$$
This is a fairly simple system to solve and upon doing so we get,
$$m = -\frac{121}{10} = -12.1 \qquad b = \frac{147}{5} = 29.4$$
So, the line that best approximates all the points above is given by,
$$y = -12.1x + 29.4$$
The sketch of the line and points after Example 2 above shows this line in relation to the points.

Example 4  Find the least squares solution to the following system of equations.
$$\begin{bmatrix}2 & -1 & 1\\ 1 & -5 & 2\\ -3 & 1 & -4\\ 1 & -1 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} = \begin{bmatrix}-4\\ 2\\ 5\\ -1\end{bmatrix}$$
Solution  Okay, there really isn't much to do here other than run through the formula. Here are the various matrices that we'll need.
$$A^T = \begin{bmatrix}2 & 1 & -3 & 1\\ -1 & -5 & 1 & -1\\ 1 & 2 & -4 & 1\end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix}-4\\ 2\\ 5\\ -1\end{bmatrix}$$
$$A^TA = \begin{bmatrix}15 & -11 & 17\\ -11 & 28 & -16\\ 17 & -16 & 22\end{bmatrix} \qquad A^T\mathbf{b} = \begin{bmatrix}-22\\ 0\\ -21\end{bmatrix}$$
The normal system is then,
$$\begin{bmatrix}15 & -11 & 17\\ -11 & 28 & -16\\ 17 & -16 & 22\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} = \begin{bmatrix}-22\\ 0\\ -21\end{bmatrix}$$
This system is a little messier to solve than the previous example, but upon solving we get,
$$x_1 = -\frac{18}{7} \qquad x_2 = -\frac{151}{210} \qquad x_3 = \frac{107}{210}$$
In vector form the least squares solution is then,
$$\bar{\mathbf{x}} = \begin{bmatrix}-\frac{18}{7}\\[2pt] -\frac{151}{210}\\[2pt] \frac{107}{210}\end{bmatrix}$$

We need to address one more issue before we move on to the next section. When we opened this discussion up we said that we were after a solution, denoted $\bar{\mathbf{x}}$, so that $A\bar{\mathbf{x}}$ will be as close to $\mathbf{b}$ as possible in some way. We then defined,
$$\boldsymbol{\varepsilon} = \mathbf{b} - A\mathbf{x} \qquad \bar{\boldsymbol{\varepsilon}} = \mathbf{b} - A\bar{\mathbf{x}}$$
and stated that what we meant by as close to $\mathbf{b}$ as possible was that we wanted to find the $\bar{\mathbf{x}}$ for which,
$$\|\bar{\boldsymbol{\varepsilon}}\| < \|\boldsymbol{\varepsilon}\|$$
for all $\mathbf{x} \neq \bar{\mathbf{x}}$.

Okay, this is all fine in terms of mathematically defining what we mean by "as close as possible", but in practical terms just what are we asking for here? Let's go back to Example 3. For this example the general formula for $\boldsymbol{\varepsilon}$ is,
$$\boldsymbol{\varepsilon} = \mathbf{b} - A\mathbf{x} = \begin{bmatrix}70\\ 21\\ 110\\ -35\end{bmatrix} - \begin{bmatrix}-3m+b\\ m+b\\ -7m+b\\ 5m+b\end{bmatrix} = \begin{bmatrix}70-(-3m+b)\\ 21-(m+b)\\ 110-(-7m+b)\\ -35-(5m+b)\end{bmatrix} = \begin{bmatrix}\varepsilon_1\\ \varepsilon_2\\ \varepsilon_3\\ \varepsilon_4\end{bmatrix}$$
So, the components of the error vector, $\boldsymbol{\varepsilon}$, each measure just how close each possible choice of $m$ and $b$ will get us to the exact answer (which is given by the components of $\mathbf{b}$).

We can also think about this in terms of the equation of the line. We've been given a set of points $(x_i, y_i)$ and we want to determine an $m$ and a $b$ so that when we plug $x_i$, the $x$ coordinate of our point, into $mx + b$ the error,
$$\varepsilon_i = y_i - (mx_i + b)$$
is as small as possible (in some way that we're trying to figure out here) for all the points that we've been given.
Then if we plug in the points that we've been given we'll see that this formula is nothing more than the components of the error vector. Now, in the case of our example we were looking for,
$$\bar{\mathbf{x}} = \begin{bmatrix}m\\ b\end{bmatrix}$$
so that,
$$\|\bar{\boldsymbol{\varepsilon}}\| = \|\mathbf{b} - A\bar{\mathbf{x}}\|$$
is as small as possible, or in other words is smaller than all other possible choices of $\mathbf{x}$.

We can now answer just what we mean by "as small as possible". First, let's compute the following,
$$\|\boldsymbol{\varepsilon}\|^2 = \varepsilon_1^2 + \varepsilon_2^2 + \varepsilon_3^2 + \varepsilon_4^2$$
The least squares solution, $\bar{\mathbf{x}}$, will be the value of $\mathbf{x}$ for which,
$$\|\bar{\boldsymbol{\varepsilon}}\|^2 = \bar{\varepsilon}_1^2 + \bar{\varepsilon}_2^2 + \bar{\varepsilon}_3^2 + \bar{\varepsilon}_4^2 < \varepsilon_1^2 + \varepsilon_2^2 + \varepsilon_3^2 + \varepsilon_4^2 = \|\boldsymbol{\varepsilon}\|^2$$
and hence the name "least squares". The solution we're after is the value that will give the least value of the sum of the squares of the errors.

Example 5  Compute the error for the solution from Example 3.

Solution  First, the line that we found using least squares is,
$$y = -12.1x + 29.4$$
We can compute the errors for each of the points by plugging the given $x$ value into this line and then taking the difference of the result from the equation and the known $y$ value. Here are the error computations for each of the four points in Example 3.
$$\bar{\varepsilon}_1 = 70 - \left(-12.1(-3) + 29.4\right) = 4.3$$
$$\bar{\varepsilon}_2 = 21 - \left(-12.1(1) + 29.4\right) = 3.7$$
$$\bar{\varepsilon}_3 = 110 - \left(-12.1(-7) + 29.4\right) = -4.1$$
$$\bar{\varepsilon}_4 = -35 - \left(-12.1(5) + 29.4\right) = -3.9$$
On a side note, we could have just as easily computed these by doing the following matrix work.
$$\bar{\boldsymbol{\varepsilon}} = \begin{bmatrix}70\\ 21\\ 110\\ -35\end{bmatrix} - \begin{bmatrix}-3 & 1\\ 1 & 1\\ -7 & 1\\ 5 & 1\end{bmatrix}\begin{bmatrix}-12.1\\ 29.4\end{bmatrix} = \begin{bmatrix}4.3\\ 3.7\\ -4.1\\ -3.9\end{bmatrix}$$
The square of the error and the error are then,
$$\|\bar{\boldsymbol{\varepsilon}}\|^2 = (4.3)^2 + (3.7)^2 + (-4.1)^2 + (-3.9)^2 = 64.2 \quad\Rightarrow\quad \|\bar{\boldsymbol{\varepsilon}}\| = \sqrt{64.2} = 8.0125$$
Now, according to our discussion above this means that if we choose any other value of $m$ and $b$ and compute the error we will arrive at a value that is larger than 8.0125.

QR-Decomposition

In this section we're going to look at a way to "decompose" or "factor" an $n \times m$ matrix as follows.

Theorem 1  Suppose that $A$ is an $n \times m$ matrix with linearly independent columns, then $A$ can be factored as,
$$A = QR$$
where $Q$ is an $n \times m$ matrix with orthonormal columns and $R$ is an invertible $m \times m$ upper triangular matrix.

Proof: The proof here will consist of actually constructing $Q$ and $R$ and showing that they in fact do multiply to give $A$.

Okay, let's start with $A$ and suppose that its columns are given by $\mathbf{c}_1$, $\mathbf{c}_2$, ..., $\mathbf{c}_m$. Also suppose that we perform the Gram-Schmidt process on these vectors and arrive at a set of orthonormal vectors $\mathbf{u}_1$, $\mathbf{u}_2$, ..., $\mathbf{u}_m$. Next, define $Q$ (yes, the $Q$ in the theorem statement) to be the $n \times m$ matrix whose columns are $\mathbf{u}_1$, $\mathbf{u}_2$, ..., $\mathbf{u}_m$ and so $Q$ will be a matrix with orthonormal columns. We can then write $A$ and $Q$ as,
$$A = \begin{bmatrix}\mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_m\end{bmatrix} \qquad Q = \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2 & \cdots & \mathbf{u}_m\end{bmatrix}$$
Next, because each of the $\mathbf{c}_i$'s is in $\operatorname{span}\{\mathbf{u}_1,\ldots,\mathbf{u}_m\}$ we know from Theorem 2 of the previous section that we can write each $\mathbf{c}_i$ as a linear combination of $\mathbf{u}_1$, ..., $\mathbf{u}_m$ in the following manner.
$$\begin{aligned}\mathbf{c}_1 &= \langle\mathbf{c}_1,\mathbf{u}_1\rangle\mathbf{u}_1 + \langle\mathbf{c}_1,\mathbf{u}_2\rangle\mathbf{u}_2 + \cdots + \langle\mathbf{c}_1,\mathbf{u}_m\rangle\mathbf{u}_m\\ \mathbf{c}_2 &= \langle\mathbf{c}_2,\mathbf{u}_1\rangle\mathbf{u}_1 + \langle\mathbf{c}_2,\mathbf{u}_2\rangle\mathbf{u}_2 + \cdots + \langle\mathbf{c}_2,\mathbf{u}_m\rangle\mathbf{u}_m\\ &\ \,\vdots\\ \mathbf{c}_m &= \langle\mathbf{c}_m,\mathbf{u}_1\rangle\mathbf{u}_1 + \langle\mathbf{c}_m,\mathbf{u}_2\rangle\mathbf{u}_2 + \cdots + \langle\mathbf{c}_m,\mathbf{u}_m\rangle\mathbf{u}_m\end{aligned}$$
Next, define $R$ (and yes, this will eventually be the $R$ from the theorem statement) to be the $m \times m$ matrix defined as,
$$R = \begin{bmatrix}\langle\mathbf{c}_1,\mathbf{u}_1\rangle & \langle\mathbf{c}_2,\mathbf{u}_1\rangle & \cdots & \langle\mathbf{c}_m,\mathbf{u}_1\rangle\\ \langle\mathbf{c}_1,\mathbf{u}_2\rangle & \langle\mathbf{c}_2,\mathbf{u}_2\rangle & \cdots & \langle\mathbf{c}_m,\mathbf{u}_2\rangle\\ \vdots & \vdots & \ddots & \vdots\\ \langle\mathbf{c}_1,\mathbf{u}_m\rangle & \langle\mathbf{c}_2,\mathbf{u}_m\rangle & \cdots & \langle\mathbf{c}_m,\mathbf{u}_m\rangle\end{bmatrix}$$
Now, let's examine the product $QR$.
From the section on Matrix Arithmetic we know that the $j$th column of this product is simply $Q$ times the $j$th column of $R$. However, if you work through a couple of these you'll see that when we multiply $Q$ times the $j$th column of $R$ we arrive at the formula for $\mathbf{c}_j$ that we've got above. In other words,
$$QR = \begin{bmatrix}\mathbf{u}_1 & \cdots & \mathbf{u}_m\end{bmatrix}\begin{bmatrix}\langle\mathbf{c}_1,\mathbf{u}_1\rangle & \cdots & \langle\mathbf{c}_m,\mathbf{u}_1\rangle\\ \vdots & \ddots & \vdots\\ \langle\mathbf{c}_1,\mathbf{u}_m\rangle & \cdots & \langle\mathbf{c}_m,\mathbf{u}_m\rangle\end{bmatrix} = \begin{bmatrix}\mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_m\end{bmatrix} = A$$
So, we can factor $A$ as a product of $Q$ and $R$ and $Q$ has the correct form. Now all that we need to do is show that $R$ is an invertible upper triangular matrix and we'll be done.

First, from the Gram-Schmidt process we know that $\mathbf{u}_j$ is orthogonal to $\mathbf{c}_1$, ..., $\mathbf{c}_{j-1}$. This means that all the inner products below the main diagonal must be zero since they are all of the form $\langle\mathbf{c}_i,\mathbf{u}_j\rangle$ with $i < j$.

Now, we know from Theorem 2 from the Special Matrices section that a triangular matrix will be invertible if the main diagonal entries, $\langle\mathbf{c}_i,\mathbf{u}_i\rangle$, are non-zero. This is fairly easy to show. Here is the general formula for $\mathbf{u}_i$ from the Gram-Schmidt process.
$$\mathbf{u}_i = \mathbf{c}_i - \langle\mathbf{c}_i,\mathbf{u}_1\rangle\mathbf{u}_1 - \langle\mathbf{c}_i,\mathbf{u}_2\rangle\mathbf{u}_2 - \cdots - \langle\mathbf{c}_i,\mathbf{u}_{i-1}\rangle\mathbf{u}_{i-1}$$
Recall that we're assuming that we found the orthonormal $\mathbf{u}_i$'s and so each of these will have a norm of 1 and so the norms are not needed in the formula. Now, solving this for $\mathbf{c}_i$ gives,
$$\mathbf{c}_i = \mathbf{u}_i + \langle\mathbf{c}_i,\mathbf{u}_1\rangle\mathbf{u}_1 + \langle\mathbf{c}_i,\mathbf{u}_2\rangle\mathbf{u}_2 + \cdots + \langle\mathbf{c}_i,\mathbf{u}_{i-1}\rangle\mathbf{u}_{i-1}$$
Let's look at the diagonal entries of $R$. We'll plug the formula for $\mathbf{c}_i$ into the inner product and do some rewriting using the properties of the inner product.
$$\langle\mathbf{c}_i,\mathbf{u}_i\rangle = \langle\mathbf{u}_i,\mathbf{u}_i\rangle + \langle\mathbf{c}_i,\mathbf{u}_1\rangle\langle\mathbf{u}_1,\mathbf{u}_i\rangle + \cdots + \langle\mathbf{c}_i,\mathbf{u}_{i-1}\rangle\langle\mathbf{u}_{i-1},\mathbf{u}_i\rangle$$
However, the $\mathbf{u}_i$ are orthonormal basis vectors and so we know that
$$\langle\mathbf{u}_j,\mathbf{u}_i\rangle = 0 \quad j = 1, 2, \ldots, i-1 \qquad \langle\mathbf{u}_i,\mathbf{u}_i\rangle \neq 0$$
Using these we see that the diagonal entries are nothing more than,
$$\langle\mathbf{c}_i,\mathbf{u}_i\rangle = \langle\mathbf{u}_i,\mathbf{u}_i\rangle \neq 0$$
So, the diagonal entries of $R$ are non-zero and hence $R$ must be invertible.

So, now that we've gotten the proof out of the way let's work an example.

Example 1  Find the QR-decomposition for the matrix,
$$A = \begin{bmatrix}2 & 1 & 3\\ -1 & 0 & 7\\ 0 & -1 & -1\end{bmatrix}$$
Solution  The columns from $A$ are,
$$\mathbf{c}_1 = \begin{bmatrix}2\\ -1\\ 0\end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix}1\\ 0\\ -1\end{bmatrix} \qquad \mathbf{c}_3 = \begin{bmatrix}3\\ 7\\ -1\end{bmatrix}$$
We performed Gram-Schmidt on these vectors in Example 3 of the previous section. So, the orthonormal vectors that we'll use for $Q$ are,
$$\mathbf{u}_1 = \begin{bmatrix}\frac{2}{\sqrt{5}}\\ -\frac{1}{\sqrt{5}}\\ 0\end{bmatrix} \qquad \mathbf{u}_2 = \begin{bmatrix}\frac{1}{\sqrt{30}}\\ \frac{2}{\sqrt{30}}\\ -\frac{5}{\sqrt{30}}\end{bmatrix} \qquad \mathbf{u}_3 = \begin{bmatrix}\frac{1}{\sqrt{6}}\\ \frac{2}{\sqrt{6}}\\ \frac{1}{\sqrt{6}}\end{bmatrix}$$
and the matrix $Q$ is,
$$Q = \begin{bmatrix}\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{30}} & \frac{1}{\sqrt{6}}\\ -\frac{1}{\sqrt{5}} & \frac{2}{\sqrt{30}} & \frac{2}{\sqrt{6}}\\ 0 & -\frac{5}{\sqrt{30}} & \frac{1}{\sqrt{6}}\end{bmatrix}$$
The matrix $R$ is,
$$R = \begin{bmatrix}\langle\mathbf{c}_1,\mathbf{u}_1\rangle & \langle\mathbf{c}_2,\mathbf{u}_1\rangle & \langle\mathbf{c}_3,\mathbf{u}_1\rangle\\ 0 & \langle\mathbf{c}_2,\mathbf{u}_2\rangle & \langle\mathbf{c}_3,\mathbf{u}_2\rangle\\ 0 & 0 & \langle\mathbf{c}_3,\mathbf{u}_3\rangle\end{bmatrix} = \begin{bmatrix}\sqrt{5} & \frac{2}{\sqrt{5}} & -\frac{1}{\sqrt{5}}\\ 0 & \frac{6}{\sqrt{30}} & \frac{22}{\sqrt{30}}\\ 0 & 0 & \frac{16}{\sqrt{6}}\end{bmatrix}$$
So, the QR-decomposition for this matrix is,
$$\begin{bmatrix}2 & 1 & 3\\ -1 & 0 & 7\\ 0 & -1 & -1\end{bmatrix} = \begin{bmatrix}\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{30}} & \frac{1}{\sqrt{6}}\\ -\frac{1}{\sqrt{5}} & \frac{2}{\sqrt{30}} & \frac{2}{\sqrt{6}}\\ 0 & -\frac{5}{\sqrt{30}} & \frac{1}{\sqrt{6}}\end{bmatrix}\begin{bmatrix}\sqrt{5} & \frac{2}{\sqrt{5}} & -\frac{1}{\sqrt{5}}\\ 0 & \frac{6}{\sqrt{30}} & \frac{22}{\sqrt{30}}\\ 0 & 0 & \frac{16}{\sqrt{6}}\end{bmatrix}$$
We'll leave it to you to verify that this multiplication does in fact give $A$.
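For what it's worth, NumPy has a built-in QR factorization; the sketch below (ours, not the notes') checks it against this example. One caveat: `np.linalg.qr` may return $Q$ and $R$ with some columns and rows negated relative to the hand computation, since the factorization is only unique up to those signs, but the product is still $A$.

```python
import numpy as np

A = np.array([[ 2.0,  1.0,  3.0],
              [-1.0,  0.0,  7.0],
              [ 0.0, -1.0, -1.0]])

Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A))             # True: the factors multiply to A
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: Q has orthonormal columns
print(np.allclose(R, np.triu(R)))        # True: R is upper triangular
```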
There is a nice application of the QR-decomposition to the least squares process that we examined in the previous section. To see this, however, we will first need to prove a quick theorem.

Theorem 2  If $Q$ is an $n \times m$ matrix with $n \geq m$ then the columns of $Q$ are an orthonormal set of vectors in $\mathbb{R}^n$ with the standard Euclidean inner product if and only if $Q^TQ = I_m$.

Note that the only way $Q$ can have orthonormal columns in $\mathbb{R}^n$ is to require that $n \geq m$. Because the columns are vectors in $\mathbb{R}^n$, we know from Theorem 1 in the Orthonormal Basis section that a set of orthonormal vectors will also be linearly independent. However, from Theorem 2 in the Linear Independence section we know that if $m > n$ the column vectors will be linearly dependent.

Also, because we want to make it clear that we're using the standard Euclidean inner product we will go back to the dot product notation, $\mathbf{u}\cdot\mathbf{v}$, instead of the usual inner product notation, $\langle\mathbf{u},\mathbf{v}\rangle$.

Proof: Now recall that to prove an "if and only if" theorem we need to assume each part and show that this implies the other part. However, there is some work that we can do that we'll need in both parts so let's do that first.

Let $\mathbf{q}_1$, $\mathbf{q}_2$, ..., $\mathbf{q}_m$ be the columns of $Q$. So,
$$Q = \begin{bmatrix}\mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_m\end{bmatrix}$$
For the transpose we take the columns of $Q$ and turn them into the rows of $Q^T$. Therefore, the rows of $Q^T$ are $\mathbf{q}_1^T$, $\mathbf{q}_2^T$, ..., $\mathbf{q}_m^T$ (the transposes are needed to turn the column vectors, $\mathbf{q}_i$, into row vectors, $\mathbf{q}_i^T$) and,
$$Q^T = \begin{bmatrix}\mathbf{q}_1^T\\ \mathbf{q}_2^T\\ \vdots\\ \mathbf{q}_m^T\end{bmatrix}$$
Now, let's take a look at the product $Q^TQ$. Entries in the product will be rows of $Q^T$ times columns of $Q$ and so the product will be,
$$Q^TQ = \begin{bmatrix}\mathbf{q}_1^T\mathbf{q}_1 & \mathbf{q}_1^T\mathbf{q}_2 & \cdots & \mathbf{q}_1^T\mathbf{q}_m\\ \mathbf{q}_2^T\mathbf{q}_1 & \mathbf{q}_2^T\mathbf{q}_2 & \cdots & \mathbf{q}_2^T\mathbf{q}_m\\ \vdots & \vdots & & \vdots\\ \mathbf{q}_m^T\mathbf{q}_1 & \mathbf{q}_m^T\mathbf{q}_2 & \cdots & \mathbf{q}_m^T\mathbf{q}_m\end{bmatrix}$$
Recalling that $\mathbf{u}^T\mathbf{v} = \mathbf{u}\cdot\mathbf{v}$ we see that we can also write the product as,
$$Q^TQ = \begin{bmatrix}\mathbf{q}_1\cdot\mathbf{q}_1 & \mathbf{q}_1\cdot\mathbf{q}_2 & \cdots & \mathbf{q}_1\cdot\mathbf{q}_m\\ \mathbf{q}_2\cdot\mathbf{q}_1 & \mathbf{q}_2\cdot\mathbf{q}_2 & \cdots & \mathbf{q}_2\cdot\mathbf{q}_m\\ \vdots & \vdots & & \vdots\\ \mathbf{q}_m\cdot\mathbf{q}_1 & \mathbf{q}_m\cdot\mathbf{q}_2 & \cdots & \mathbf{q}_m\cdot\mathbf{q}_m\end{bmatrix}$$
Now, let's actually do the proof.

($\Rightarrow$) Assume that the columns of $Q$ are orthonormal and show that this means that we must have $Q^TQ = I_m$. Since we are assuming that the columns of $Q$ are orthonormal we know that,
$$\mathbf{q}_i\cdot\mathbf{q}_j = 0 \quad i \neq j \qquad \mathbf{q}_i\cdot\mathbf{q}_i = 1 \quad i = 1, 2, \ldots, m$$
Therefore the product is,
$$Q^TQ = \begin{bmatrix}1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 1\end{bmatrix} = I_m$$
So we're done with this part.

($\Leftarrow$) Here assume that $Q^TQ = I_m$ and we'll need to show that this means that the columns of $Q$ are orthonormal. So, we're assuming that,
$$Q^TQ = \begin{bmatrix}\mathbf{q}_1\cdot\mathbf{q}_1 & \cdots & \mathbf{q}_1\cdot\mathbf{q}_m\\ \vdots & & \vdots\\ \mathbf{q}_m\cdot\mathbf{q}_1 & \cdots & \mathbf{q}_m\cdot\mathbf{q}_m\end{bmatrix} = \begin{bmatrix}1 & \cdots & 0\\ \vdots & & \vdots\\ 0 & \cdots & 1\end{bmatrix} = I_m$$
However, simply by setting entries in these two matrices equal we see that,
$$\mathbf{q}_i\cdot\mathbf{q}_j = 0 \quad i \neq j \qquad \mathbf{q}_i\cdot\mathbf{q}_i = 1 \quad i = 1, 2, \ldots, m$$
and this is exactly what it means for the columns to be orthonormal, so we're done.

The following theorem can be used, on occasion, to significantly reduce the amount of work required for the least squares problem.

Theorem 3  Suppose that $A$ has linearly independent columns. Then the normal system associated with $A\mathbf{x} = \mathbf{b}$ can be written as,
$$R\bar{\mathbf{x}} = Q^T\mathbf{b}$$

Proof: There really isn't much to do here other than plug formulas in. We'll start with the normal system for $A\mathbf{x} = \mathbf{b}$.
$$A^TA\bar{\mathbf{x}} = A^T\mathbf{b}$$
Now, $A$ has linearly independent columns so we know that it has a QR-decomposition, so let's plug the decomposition into the normal system and, using properties of transposes, rewrite things a little.
$$(QR)^TQR\bar{\mathbf{x}} = (QR)^T\mathbf{b}$$
$$R^TQ^TQR\bar{\mathbf{x}} = R^TQ^T\mathbf{b}$$
Now, since the columns of $Q$ are orthonormal we know that $Q^TQ = I_m$ by Theorem 2 above. Also, we know that $R$ is an invertible matrix and so we know that $R^T$ is also an invertible matrix. So, we'll also multiply both sides by $\left(R^T\right)^{-1}$. Upon doing all this we arrive at,
$$R\bar{\mathbf{x}} = Q^T\mathbf{b}$$

So, just how is this supposed to help us with the least squares problem? Well, since $R$ is upper triangular, this will be a very easy system to solve. It can, however, take some work to get down to this system. Let's rework the last example from the previous section, only this time we'll use the QR-decomposition method.

Example 2  Find the least squares solution to the following system of equations.
$$\begin{bmatrix}2 & -1 & 1\\ 1 & -5 & 2\\ -3 & 1 & -4\\ 1 & -1 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} = \begin{bmatrix}-4\\ 2\\ 5\\ -1\end{bmatrix}$$
Solution  First, we'll leave it to you to verify that the columns of $A$ are linearly independent. Here are the columns of $A$.
$$\mathbf{c}_1 = \begin{bmatrix}2\\ 1\\ -3\\ 1\end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix}-1\\ -5\\ 1\\ -1\end{bmatrix} \qquad \mathbf{c}_3 = \begin{bmatrix}1\\ 2\\ -4\\ 1\end{bmatrix}$$
Now, we'll need to perform Gram-Schmidt on these to get them into a set of orthonormal vectors. The first step is,
$$\mathbf{u}_1 = \mathbf{c}_1 = \begin{bmatrix}2\\ 1\\ -3\\ 1\end{bmatrix}$$
Here's the inner product and norm that we'll need for the second step.
$$\langle\mathbf{c}_2,\mathbf{u}_1\rangle = -11 \qquad \|\mathbf{u}_1\|^2 = 15$$
The second vector is then,
$$\mathbf{u}_2 = \mathbf{c}_2 - \frac{\langle\mathbf{c}_2,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 = \begin{bmatrix}-1\\ -5\\ 1\\ -1\end{bmatrix} + \frac{11}{15}\begin{bmatrix}2\\ 1\\ -3\\ 1\end{bmatrix} = \begin{bmatrix}\frac{7}{15}\\ -\frac{64}{15}\\ -\frac{6}{5}\\ -\frac{4}{15}\end{bmatrix}$$
The final step will require the following inner products and norms.
$$\langle\mathbf{c}_3,\mathbf{u}_1\rangle = 17 \qquad \langle\mathbf{c}_3,\mathbf{u}_2\rangle = -\frac{53}{15} \qquad \|\mathbf{u}_1\|^2 = 15 \qquad \|\mathbf{u}_2\|^2 = \frac{299}{15}$$
The third, and final, orthogonal vector is then,
$$\mathbf{u}_3 = \mathbf{c}_3 - \frac{\langle\mathbf{c}_3,\mathbf{u}_1\rangle}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 - \frac{\langle\mathbf{c}_3,\mathbf{u}_2\rangle}{\|\mathbf{u}_2\|^2}\mathbf{u}_2 = \begin{bmatrix}-\frac{354}{299}\\[2pt] \frac{33}{299}\\[2pt] -\frac{243}{299}\\[2pt] -\frac{54}{299}\end{bmatrix}$$
Okay, these are the orthogonal vectors. If we divide each of them by their norms we will get the orthonormal vectors that we need for the decomposition. The norms are,
$$\|\mathbf{u}_1\| = \sqrt{15} \qquad \|\mathbf{u}_2\| = \frac{\sqrt{4485}}{15} \qquad \|\mathbf{u}_3\| = \frac{3\sqrt{20930}}{299}$$
The orthonormal vectors are then,
$$\mathbf{w}_1 = \begin{bmatrix}\frac{2}{\sqrt{15}}\\[2pt] \frac{1}{\sqrt{15}}\\[2pt] -\frac{3}{\sqrt{15}}\\[2pt] \frac{1}{\sqrt{15}}\end{bmatrix} \qquad \mathbf{w}_2 = \begin{bmatrix}\frac{7}{\sqrt{4485}}\\[2pt] -\frac{64}{\sqrt{4485}}\\[2pt] -\frac{18}{\sqrt{4485}}\\[2pt] -\frac{4}{\sqrt{4485}}\end{bmatrix} \qquad \mathbf{w}_3 = \begin{bmatrix}-\frac{118}{\sqrt{20930}}\\[2pt] \frac{11}{\sqrt{20930}}\\[2pt] -\frac{81}{\sqrt{20930}}\\[2pt] -\frac{18}{\sqrt{20930}}\end{bmatrix}$$
We can now write down $Q$ for the decomposition.
$$Q = \begin{bmatrix}\frac{2}{\sqrt{15}} & \frac{7}{\sqrt{4485}} & -\frac{118}{\sqrt{20930}}\\[2pt] \frac{1}{\sqrt{15}} & -\frac{64}{\sqrt{4485}} & \frac{11}{\sqrt{20930}}\\[2pt] -\frac{3}{\sqrt{15}} & -\frac{18}{\sqrt{4485}} & -\frac{81}{\sqrt{20930}}\\[2pt] \frac{1}{\sqrt{15}} & -\frac{4}{\sqrt{4485}} & -\frac{18}{\sqrt{20930}}\end{bmatrix}$$
Finally, $R$ is given by,
$$R = \begin{bmatrix}\langle\mathbf{c}_1,\mathbf{w}_1\rangle & \langle\mathbf{c}_2,\mathbf{w}_1\rangle & \langle\mathbf{c}_3,\mathbf{w}_1\rangle\\ 0 & \langle\mathbf{c}_2,\mathbf{w}_2\rangle & \langle\mathbf{c}_3,\mathbf{w}_2\rangle\\ 0 & 0 & \langle\mathbf{c}_3,\mathbf{w}_3\rangle\end{bmatrix} = \begin{bmatrix}\sqrt{15} & -\frac{11}{\sqrt{15}} & \frac{17}{\sqrt{15}}\\[2pt] 0 & \frac{\sqrt{4485}}{15} & -\frac{53}{\sqrt{4485}}\\[2pt] 0 & 0 & \frac{210}{\sqrt{20930}}\end{bmatrix}$$
Okay, we can now proceed with the least squares process. First, we'll need $Q^T$.
$$Q^T = \begin{bmatrix}\frac{2}{\sqrt{15}} & \frac{1}{\sqrt{15}} & -\frac{3}{\sqrt{15}} & \frac{1}{\sqrt{15}}\\[2pt] \frac{7}{\sqrt{4485}} & -\frac{64}{\sqrt{4485}} & -\frac{18}{\sqrt{4485}} & -\frac{4}{\sqrt{4485}}\\[2pt] -\frac{118}{\sqrt{20930}} & \frac{11}{\sqrt{20930}} & -\frac{81}{\sqrt{20930}} & -\frac{18}{\sqrt{20930}}\end{bmatrix}$$
The normal system $R\bar{\mathbf{x}} = Q^T\mathbf{b}$ can then be written as,
$$\begin{bmatrix}\sqrt{15} & -\frac{11}{\sqrt{15}} & \frac{17}{\sqrt{15}}\\[2pt] 0 & \frac{\sqrt{4485}}{15} & -\frac{53}{\sqrt{4485}}\\[2pt] 0 & 0 & \frac{210}{\sqrt{20930}}\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} = \begin{bmatrix}-\frac{22}{\sqrt{15}}\\[2pt] -\frac{242}{\sqrt{4485}}\\[2pt] \frac{107}{\sqrt{20930}}\end{bmatrix}$$
This corresponds to the following system of equations.
$$x_1 - \frac{11}{15}x_2 + \frac{17}{15}x_3 = -\frac{22}{15} \qquad\Rightarrow\qquad x_1 = -\frac{18}{7}$$
$$\frac{299}{\sqrt{4485}}x_2 - \frac{53}{\sqrt{4485}}x_3 = -\frac{242}{\sqrt{4485}} \qquad\Rightarrow\qquad x_2 = -\frac{151}{210}$$
$$\frac{210}{\sqrt{20930}}x_3 = \frac{107}{\sqrt{20930}} \qquad\Rightarrow\qquad x_3 = \frac{107}{210}$$
These are the same values that we received in the previous section.

At this point you are probably asking yourself just why this method is better than the method we used in the previous section. After all, it was a lot of work and some of the numbers were downright awful. The answer is that by hand, this may not be the best way of doing these problems; however, if you are going to program the least squares method into a computer, all of the steps here are very easy to program and so this method is a very nice method for programming the least squares process.
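To make that last remark concrete, here is a sketch (ours, not the notes') of the programmed version: factor $A = QR$, form $Q^T\mathbf{b}$, and finish with back substitution on the triangular system from Theorem 3.

```python
import numpy as np

def qr_least_squares(A, b):
    """Least squares via Theorem 3: factor A = QR, then solve R x = Q^T b.
    R is upper triangular, so back substitution finishes the job."""
    Q, R = np.linalg.qr(A)
    y = Q.T @ b
    n = R.shape[1]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution
        x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

A = np.array([[ 2.0, -1.0,  1.0], [ 1.0, -5.0,  2.0],
              [-3.0,  1.0, -4.0], [ 1.0, -1.0,  1.0]])
b = np.array([-4.0, 2.0, 5.0, -1.0])

print(qr_least_squares(A, b))
# [-2.5714 -0.7190  0.5095], i.e. (-18/7, -151/210, 107/210)
```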
Orthogonal Matrices

In this section we're going to be talking about a special kind of matrix called an orthogonal matrix. This is also going to be a fairly short section (at least in relation to many of the other sections in this chapter anyway) to close out the chapter. We'll start with the following definition.

Definition 1  Let $Q$ be a square matrix and suppose that
$$Q^{-1} = Q^T$$
then we call $Q$ an orthogonal matrix.

Notice that because we need to have an inverse for $Q$ in order for it to be orthogonal we are implicitly assuming that $Q$ is a square matrix here.

Before we see any examples of orthogonal matrices (and we have already seen at least one orthogonal matrix) let's get a couple of theorems out of the way.

Theorem 1  Suppose that $Q$ is a square matrix, then $Q$ is orthogonal if and only if $QQ^T = Q^TQ = I$.

Proof: This is a really simple proof that falls directly from the definition of what it means for a matrix to be orthogonal.

($\Rightarrow$) In this direction we'll assume that $Q$ is orthogonal and so we know that $Q^{-1} = Q^T$, but this promptly tells us that,
$$QQ^T = Q^TQ = I$$
($\Leftarrow$) In this direction we'll assume that $QQ^T = Q^TQ = I$. Since this is exactly what is needed to show that we have an inverse we can see that $Q^{-1} = Q^T$ and so $Q$ is orthogonal.

The next theorem gives us an easier check for a matrix being orthogonal.

Theorem 2  Suppose that $Q$ is an $n \times n$ matrix, then the following are all equivalent.
(a) $Q$ is orthogonal.
(b) The columns of $Q$ are an orthonormal set of vectors in $\mathbb{R}^n$ under the standard Euclidean inner product.
(c) The rows of $Q$ are an orthonormal set of vectors in $\mathbb{R}^n$ under the standard Euclidean inner product.

Proof: We've actually done most of this proof already. Normally in this kind of theorem we'd prove a loop of equivalences such as (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$ (a). However, in this case if we prove (a) $\Leftrightarrow$ (b) and (a) $\Leftrightarrow$ (c) we get the above loop of equivalences by default, and it will be much easier to prove the two equivalences this way.

The equivalence (a) $\Leftrightarrow$ (b) is directly given by Theorem 2 from the previous section since that theorem is in fact a more general version of this equivalence. The proof of the equivalence (a) $\Leftrightarrow$ (c) is nearly identical to the proof of Theorem 2 from the previous section and so we'll leave it to you to fill in the details.

Since it is much easier to verify that the columns/rows of a matrix are orthonormal than it is to check $Q^{-1} = Q^T$ in general, this theorem will be useful for identifying orthogonal matrices. As noted above, in order for a matrix to be an orthogonal matrix it must be square. So a matrix that is not square, but does have orthonormal columns, will not be orthogonal. Also, note that we did mean to say that the columns are orthonormal. This may seem odd given that we call the matrix "orthogonal" when "orthonormal" would probably be a better name for the matrix, but traditionally this kind of matrix has been called orthogonal and so we'll keep up with tradition.

In the previous section we were finding QR-decompositions and, if you recall, the matrix $Q$ had columns that were a set of orthonormal vectors; so if $Q$ is a square matrix then it will also be an orthogonal matrix, while if it isn't square then it won't be an orthogonal matrix. At this point we should probably do an example or two.

Example 1  Here are the QR-decompositions that we performed in the previous section.
From Example 1,
$$A = \begin{bmatrix}2 & 1 & 3\\ -1 & 0 & 7\\ 0 & -1 & -1\end{bmatrix} = \begin{bmatrix}\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{30}} & \frac{1}{\sqrt{6}}\\ -\frac{1}{\sqrt{5}} & \frac{2}{\sqrt{30}} & \frac{2}{\sqrt{6}}\\ 0 & -\frac{5}{\sqrt{30}} & \frac{1}{\sqrt{6}}\end{bmatrix}\begin{bmatrix}\sqrt{5} & \frac{2}{\sqrt{5}} & -\frac{1}{\sqrt{5}}\\ 0 & \frac{6}{\sqrt{30}} & \frac{22}{\sqrt{30}}\\ 0 & 0 & \frac{16}{\sqrt{6}}\end{bmatrix} = QR$$
From Example 2,
$$A = \begin{bmatrix}2 & -1 & 1\\ 1 & -5 & 2\\ -3 & 1 & -4\\ 1 & -1 & 1\end{bmatrix} = \begin{bmatrix}\frac{2}{\sqrt{15}} & \frac{7}{\sqrt{4485}} & -\frac{118}{\sqrt{20930}}\\[2pt] \frac{1}{\sqrt{15}} & -\frac{64}{\sqrt{4485}} & \frac{11}{\sqrt{20930}}\\[2pt] -\frac{3}{\sqrt{15}} & -\frac{18}{\sqrt{4485}} & -\frac{81}{\sqrt{20930}}\\[2pt] \frac{1}{\sqrt{15}} & -\frac{4}{\sqrt{4485}} & -\frac{18}{\sqrt{20930}}\end{bmatrix}\begin{bmatrix}\sqrt{15} & -\frac{11}{\sqrt{15}} & \frac{17}{\sqrt{15}}\\[2pt] 0 & \frac{\sqrt{4485}}{15} & -\frac{53}{\sqrt{4485}}\\[2pt] 0 & 0 & \frac{210}{\sqrt{20930}}\end{bmatrix} = QR$$
In the first case the matrix $Q$ is square and, by construction, has orthonormal columns, so it is an orthogonal matrix. In the second case the matrix $Q$, again by construction, has orthonormal columns. However, since it is not a square matrix it is NOT an orthogonal matrix.

Example 2  Find value(s) for $a$, $b$, and $c$ for which the following matrix will be orthogonal.
$$Q = \begin{bmatrix}-\frac{2}{3} & 0 & a\\[2pt] \frac{2}{3} & -\frac{1}{\sqrt{5}} & b\\[2pt] \frac{1}{3} & \frac{2}{\sqrt{5}} & c\end{bmatrix}$$
Solution  So, the columns of $Q$ are,
$$\mathbf{q}_1 = \begin{bmatrix}-\frac{2}{3}\\[2pt] \frac{2}{3}\\[2pt] \frac{1}{3}\end{bmatrix} \qquad \mathbf{q}_2 = \begin{bmatrix}0\\ -\frac{1}{\sqrt{5}}\\[2pt] \frac{2}{\sqrt{5}}\end{bmatrix} \qquad \mathbf{q}_3 = \begin{bmatrix}a\\ b\\ c\end{bmatrix}$$
We will leave it to you to verify that $\|\mathbf{q}_1\| = 1$, $\|\mathbf{q}_2\| = 1$ and $\mathbf{q}_1\cdot\mathbf{q}_2 = 0$, and so all we need to do is find $a$, $b$, and $c$ for which we will have $\|\mathbf{q}_3\| = 1$, $\mathbf{q}_1\cdot\mathbf{q}_3 = 0$ and $\mathbf{q}_2\cdot\mathbf{q}_3 = 0$.

Let's start with the two dot products and see what we get.
$$\mathbf{q}_2\cdot\mathbf{q}_3 = -\frac{1}{\sqrt{5}}b + \frac{2}{\sqrt{5}}c = 0$$
$$\mathbf{q}_1\cdot\mathbf{q}_3 = -\frac{2}{3}a + \frac{2}{3}b + \frac{1}{3}c = 0$$
From the first dot product we can see that $b = 2c$. Plugging this into the second dot product gives us $c = \frac{2}{5}a$. Using the fact that we now know what $c$ is in terms of $a$ and plugging this into $b = 2c$ we can see that $b = \frac{4}{5}a$.
Now, using the above work we know that in order for the third column to be orthogonal to the first two (since we haven't even touched on orthonormal yet) it must be of the form,
$$\mathbf{w} = \begin{bmatrix}a\\[2pt] \frac{4}{5}a\\[2pt] \frac{2}{5}a\end{bmatrix}$$
Finally, we need to make sure that the third column has a norm of 1. In other words we need to require that $\|\mathbf{w}\| = 1$, or we can require that $\|\mathbf{w}\|^2 = 1$ since we know that the norm must be a positive quantity here. So, let's compute $\|\mathbf{w}\|^2$, set it equal to one and see what we get.
$$1 = \|\mathbf{w}\|^2 = a^2 + \frac{16}{25}a^2 + \frac{4}{25}a^2 = \frac{45}{25}a^2 \qquad\Rightarrow\qquad a = \pm\frac{\sqrt{5}}{3}$$
This gives us two possible values of $a$ that we can use and this in turn means that we could use either of the following two vectors for $\mathbf{q}_3$,
$$\mathbf{q}_3 = \begin{bmatrix}\frac{\sqrt{5}}{3}\\[2pt] \frac{4}{3\sqrt{5}}\\[2pt] \frac{2}{3\sqrt{5}}\end{bmatrix} \qquad\text{OR}\qquad \mathbf{q}_3 = \begin{bmatrix}-\frac{\sqrt{5}}{3}\\[2pt] -\frac{4}{3\sqrt{5}}\\[2pt] -\frac{2}{3\sqrt{5}}\end{bmatrix}$$

A natural question is why do we care about orthogonal matrices? The following theorem gives some very nice properties of orthogonal matrices.

Theorem 3  If $Q$ is an $n \times n$ matrix then the following are all equivalent.
(a) $Q$ is orthogonal.
(b) $\|Q\mathbf{x}\| = \|\mathbf{x}\|$ for all $\mathbf{x}$ in $\mathbb{R}^n$. This is often called preserving norms.
(c) $Q\mathbf{x}\cdot Q\mathbf{y} = \mathbf{x}\cdot\mathbf{y}$ for all $\mathbf{x}$ and all $\mathbf{y}$ in $\mathbb{R}^n$. This is often called preserving dot products.

Proof: We'll prove this set of statements in the order (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$ (a).

(a) $\Rightarrow$ (b): We'll start off by assuming that $Q$ is orthogonal and let's write down the norm.
$$\|Q\mathbf{x}\| = \left(Q\mathbf{x}\cdot Q\mathbf{x}\right)^{1/2}$$
However, we know that we can write the dot product as,
$$\|Q\mathbf{x}\| = \left(\mathbf{x}\cdot Q^TQ\mathbf{x}\right)^{1/2}$$
Now we can use the fact that $Q$ is orthogonal to write $Q^TQ = I$. Using this gives,
$$\|Q\mathbf{x}\| = \left(\mathbf{x}\cdot\mathbf{x}\right)^{1/2} = \|\mathbf{x}\|$$
which is what we were after.

(b) $\Rightarrow$ (c): We'll assume that $\|Q\mathbf{x}\| = \|\mathbf{x}\|$ for all $\mathbf{x}$ in $\mathbb{R}^n$. Let's assume that $\mathbf{x}$ and $\mathbf{y}$ are any two vectors in $\mathbb{R}^n$. Then using Theorem 8 from the section on Euclidean n-space we have,
$$Q\mathbf{x}\cdot Q\mathbf{y} = \frac14\|Q\mathbf{x}+Q\mathbf{y}\|^2 - \frac14\|Q\mathbf{x}-Q\mathbf{y}\|^2 = \frac14\|Q(\mathbf{x}+\mathbf{y})\|^2 - \frac14\|Q(\mathbf{x}-\mathbf{y})\|^2$$
Next, both $\mathbf{x}+\mathbf{y}$ and $\mathbf{x}-\mathbf{y}$ are in $\mathbb{R}^n$ and so by assumption and a use of Theorem 8 again we have,
$$Q\mathbf{x}\cdot Q\mathbf{y} = \frac14\|\mathbf{x}+\mathbf{y}\|^2 - \frac14\|\mathbf{x}-\mathbf{y}\|^2 = \mathbf{x}\cdot\mathbf{y}$$
which is what we were after in this case.

(c) $\Rightarrow$ (a): In this case we'll assume that $Q\mathbf{x}\cdot Q\mathbf{y} = \mathbf{x}\cdot\mathbf{y}$ for all $\mathbf{x}$ and all $\mathbf{y}$ in $\mathbb{R}^n$. As we did in the first part of this proof we'll rewrite the dot product on the left,
$$\mathbf{x}\cdot Q^TQ\mathbf{y} = \mathbf{x}\cdot\mathbf{y}$$
Now, rearrange things a little and we can arrive at the following,
$$\mathbf{x}\cdot Q^TQ\mathbf{y} - \mathbf{x}\cdot\mathbf{y} = 0$$
$$\mathbf{x}\cdot\left(Q^TQ\mathbf{y} - \mathbf{y}\right) = 0$$
$$\mathbf{x}\cdot\left(\left(Q^TQ - I\right)\mathbf{y}\right) = 0$$
Now, this must hold for every $\mathbf{x}$ in $\mathbb{R}^n$ and so let $\mathbf{x} = \left(Q^TQ - I\right)\mathbf{y}$. This then gives,
$$\left(\left(Q^TQ - I\right)\mathbf{y}\right)\cdot\left(\left(Q^TQ - I\right)\mathbf{y}\right) = 0$$
Theorem 2(e) from the Euclidean n-space section tells us that we must then have,
$$\left(Q^TQ - I\right)\mathbf{y} = \mathbf{0}$$
and this must be true for all $\mathbf{y}$ in $\mathbb{R}^n$. That can only happen if the coefficient matrix of this system is the zero matrix or,
$$Q^TQ - I = 0 \qquad\Rightarrow\qquad Q^TQ = I$$
Finally, by Theorem 1 above, $Q$ must be orthogonal.

The second and third statements in this theorem are very useful since they tell us that we can add or take out an orthogonal matrix from a norm or a dot product at will and we'll preserve the result.
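A familiar concrete case of Theorem 3 is a rotation matrix, which is orthogonal. Here is a quick numerical illustration (our sketch, with an arbitrarily chosen angle and vectors, not from the notes):

```python
import numpy as np

t = 0.7                                   # any angle; rotations are orthogonal
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

x = np.array([3.0, -4.0])
y = np.array([1.0, 2.0])

print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True: norms preserved
print(np.isclose((Q @ x) @ (Q @ y), x @ y))                  # True: dot products preserved
```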
As a final theorem for this section, here are a couple of other nice properties of orthogonal matrices.

Theorem 4  Suppose that $A$ and $B$ are two orthogonal $n \times n$ matrices, then,
(a) $A^{-1}$ is an orthogonal matrix.
(b) $AB$ is an orthogonal matrix.
(c) Either $\det(A) = 1$ or $\det(A) = -1$.

Proof: The proof of all three parts follows pretty much from basic properties of orthogonal matrices.

(a) Since $A$ is orthogonal, its column vectors form an orthogonal (in fact they are orthonormal, but we only need orthogonal for this) set of vectors. Now, by the definition of orthogonal matrices, we have $A^{-1} = A^T$. But this means that the rows of $A^{-1}$ are nothing more than the columns of $A$ and so are an orthogonal set of vectors, and so by Theorem 2 above $A^{-1}$ is an orthogonal matrix.

(b) In this case let's start with the following norm,
$$\|AB\mathbf{x}\| = \|A(B\mathbf{x})\|$$
where $\mathbf{x}$ is any vector from $\mathbb{R}^n$. But $A$ is orthogonal and so by Theorem 3 above must preserve norms. In other words we must have,
$$\|AB\mathbf{x}\| = \|A(B\mathbf{x})\| = \|B\mathbf{x}\|$$
Now we can use the fact that $B$ is also orthogonal and so will preserve norms as well. This gives,
$$\|AB\mathbf{x}\| = \|B\mathbf{x}\| = \|\mathbf{x}\|$$
Therefore, the product $AB$ also preserves norms and hence by Theorem 3 must be orthogonal.

(c) In this case we'll start with the fact that since $A$ is orthogonal we know that $AA^T = I$ and let's take the determinant of both sides,
$$\det\left(AA^T\right) = \det(I) = 1$$
Next use Theorem 3 and Theorem 6 from the Properties of Determinants section to rewrite this as,
$$\det(A)\det\left(A^T\right) = 1 \quad\Rightarrow\quad \det(A)\det(A) = 1 \quad\Rightarrow\quad \left[\det(A)\right]^2 = 1 \quad\Rightarrow\quad \det(A) = \pm 1$$
So, we get the result.

Eigenvalues and Eigenvectors

Introduction

This is going to be a very short chapter. The main topic of this chapter will be the Eigenvalues and Eigenvectors section. In this section we will be looking at special situations where, given a square matrix $A$ and a vector $\mathbf{x}$, the product $A\mathbf{x}$ will be the same as the scalar multiplication $\lambda\mathbf{x}$ for some scalar, $\lambda$. This idea has important applications in many areas of math and science and so we put it into a chapter of its own.

We'll also have a quick review of determinants since those will be required in order to do the work in the Eigenvalues and Eigenvectors section. We'll also take a look at an application that uses eigenvalues. Here is a listing of the topics in this chapter.

Review of Determinants – In this section we'll do a quick review of determinants.
Eigenvalues and Eigenvectors – Here we will take a look at the main section in this chapter. We'll be looking at the concept of Eigenvalues and Eigenvectors.
Diagonalization – We'll be looking at diagonalizable matrices in this section.

Review of Determinants

In this section we are going to do a quick review of determinants and we'll be concentrating almost exclusively on how to compute them. For a more in depth look at determinants you should check out the second chapter, which is devoted to determinants and their properties. Also, we'll acknowledge that the examples in this section are all examples that were worked in the second chapter.

We'll start off with a quick "working" definition of a determinant. See The Determinant Function from the second chapter for the exact definition of a determinant. What we're going to give here will be sufficient for what we're going to be doing in this chapter.

So, given a square matrix, $A$, the determinant of $A$, denoted by $\det(A)$, is a function that associates with $A$ a number. That's it. That's what a determinant does. It takes a matrix and associates a number with that matrix.

There is also some alternate notation that we should acknowledge because we'll be using it quite a bit. The alternate notation is $\det(A) = |A|$. We now need to discuss how to compute determinants.
There are many ways of computing determinants, but most of the general methods can lead to some fairly long computations. We will see one general method towards the end of this section, but there are some nice quick formulas that can help with some special cases, so we'll start with those. We'll be working mostly with matrices in this chapter that fit into these special cases. We will start with the formulas for $2 \times 2$ and $3 \times 3$ matrices.

Definition 1  If
$$A = \begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}$$
then the determinant of $A$ is,
$$\det(A) = \begin{vmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$$

Definition 2  If
$$A = \begin{bmatrix}a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{bmatrix}$$
then the determinant of $A$ is,
$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32} - a_{13}a_{22}a_{31}$$

Okay, we said that these were "nice" and "quick" formulas and the formula for the $2 \times 2$ matrix is fairly nice and quick, but the formula for the $3 \times 3$ matrix is neither nice nor quick. Luckily there are some nice little "tricks" that can help us to write down both formulas.

We'll start with the determinant of a $2 \times 2$ matrix and sketch in two diagonals. Note that if you multiply along the green diagonal you will get the first product in the formula for $2 \times 2$ matrices and if you multiply along the red diagonal you will get the second product in the formula. Also, notice that the red diagonal, running from right to left, was the product that was subtracted off, while the green diagonal, running from left to right, gave the product that was added.

We can do something similar for $3 \times 3$ matrices, but there is a difference. First, we need to tack a copy of the leftmost two columns onto the right side of the determinant. We then have three diagonals that run from left to right (shown in green below) and three diagonals that run from right to left (shown in red below). As with the $2 \times 2$ case, if we multiply along the green diagonals we get the products in the formula that are added, and if we multiply along the red diagonals we get the products in the formula that are subtracted.

Here are a couple of quick examples.

Example 1  Compute the determinant of each of the following matrices.
$$\text{(a)}\ A = \begin{bmatrix}3 & 2\\ -9 & 5\end{bmatrix} \qquad \text{(b)}\ B = \begin{bmatrix}3 & 5 & 4\\ -2 & -1 & 8\\ -11 & 1 & 7\end{bmatrix} \qquad \text{(c)}\ C = \begin{bmatrix}2 & 6 & -2\\ 2 & 8 & -3\\ 3 & 1 & 1\end{bmatrix}$$

Solution
(a) We don't really need to sketch in the diagonals for $2 \times 2$ matrices. The determinant is simply the product of the diagonal running left to right minus the product of the diagonal running from right to left. So, here is the determinant for this matrix. The only thing we need to worry about is paying attention to minus signs. It is easy to make a mistake with minus signs in these computations if you aren't paying attention.
$$\det(A) = (3)(5) - (2)(-9) = 33$$
$$\det(B) = (3)(-1)(7) + (5)(8)(-11) + (4)(-2)(1) - (5)(-2)(7) - (3)(8)(1) - (4)(-1)(-11) = -467$$

(c) We'll do this one with a little less detail. We'll copy the columns but not bother to actually sketch in the diagonals this time.
$$\det(C) = (2)(-8)(1) + (-6)(3)(-3) + (2)(2)(1) - (-6)(2)(1) - (2)(3)(1) - (2)(-8)(-3) = 0$$

As we can see from this example the determinant of a matrix can be positive, negative, or zero. Likewise, as we will see towards the end of this review, we are going to be especially interested in when the determinant of a matrix is zero. Because of this we have the following definition.

Definition 3 Suppose $A$ is a square matrix.
(a) If $\det(A) = 0$ we call $A$ a singular matrix.
(b) If $\det(A) \ne 0$ we call $A$ a non-singular matrix.

So, in Example 1 above, both $A$ and $B$ are non-singular while $C$ is singular.

Before we proceed we should point out that while there are formulas for larger matrices (see the Determinant Function section for details on how to write them down) there are not any easy tricks with diagonals to write them down as we had for $2 \times 2$ and $3 \times 3$ matrices.

With the statement above made we should note that there is a simple formula for general matrices of certain kinds. The following theorem gives this formula.

Theorem 1 Suppose that $A$ is an $n \times n$ triangular matrix with diagonal entries $a_{11}$, $a_{22}$, …, $a_{nn}$. The determinant of $A$ is,
$$\det(A) = a_{11}a_{22}\cdots a_{nn}$$
This theorem is valid regardless of whether the triangular matrix is an upper triangular matrix or a lower triangular matrix. Also, because a diagonal matrix can also be considered to be a triangular matrix, Theorem 1 is also valid for diagonal matrices.

Here are a couple of quick examples of this.

Example 2 Compute the determinant of each of the following matrices.
$$A = \begin{bmatrix} 5 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 6 & 0 \\ 2 & -1 \end{bmatrix} \qquad C = \begin{bmatrix} 10 & 5 & 1 & 3 \\ 0 & 0 & 4 & 9 \\ 0 & 0 & -6 & 4 \\ 0 & 0 & 0 & 5 \end{bmatrix}$$

Solution Here are these determinants.
$$\det(A) = (5)(-3)(4) = -60 \qquad \det(B) = (6)(-1) = -6 \qquad \det(C) = (10)(0)(-6)(5) = 0$$

There are several methods for finding determinants in general. One of them is the Method of Cofactors. What follows is a very brief overview of this method. For a more detailed discussion of this method see the Method of Cofactors in the Determinants chapter. We'll start with a couple of definitions first.

Definition 4 If $A$ is a square matrix then the minor of $a_{ij}$, denoted by $M_{ij}$, is the determinant of the submatrix that results from removing the $i$th row and $j$th column of $A$.

Definition 5 If $A$ is a square matrix then the cofactor of $a_{ij}$, denoted by $C_{ij}$, is the number $(-1)^{i+j}M_{ij}$.

Here is a quick example showing some minor and cofactor computations.

Example 3 For the following matrix compute the cofactors $C_{12}$, $C_{24}$, and $C_{32}$.
$$A = \begin{bmatrix} 4 & 0 & 10 & 4 \\ -1 & 2 & 3 & 9 \\ 5 & -5 & -1 & 6 \\ 3 & 7 & 1 & -2 \end{bmatrix}$$

Solution In order to compute the cofactors we'll first need the minor associated with each cofactor. Remember that in order to compute the minor we will remove the $i$th row and $j$th column of $A$. So, to compute $M_{12}$ (which we'll need for $C_{12}$) we'll need to compute the determinant of the matrix we get by removing the 1st row and 2nd column of $A$. Here is that work.
$$M_{12} = \begin{vmatrix} -1 & 3 & 9 \\ 5 & -1 & 6 \\ 3 & 1 & -2 \end{vmatrix} = 160$$
We've removed the row and column indicated and we'll leave it to you to verify the determinant computation. Now we can get the cofactor.
$$C_{12} = (-1)^{1+2}M_{12} = (-1)^{3}(160) = -160$$
Let's now move onto the second cofactor. Here is the work for the minor.
$$M_{24} = \begin{vmatrix} 4 & 0 & 10 \\ 5 & -5 & -1 \\ 3 & 7 & 1 \end{vmatrix} = 508$$
The cofactor in this case is,
$$C_{24} = (-1)^{2+4}M_{24} = (-1)^{6}(508) = 508$$
Here is the work for the final cofactor.
$$M_{32} = \begin{vmatrix} 4 & 10 & 4 \\ -1 & 3 & 9 \\ 3 & 1 & -2 \end{vmatrix} = 150 \qquad C_{32} = (-1)^{3+2}M_{32} = (-1)^{5}(150) = -150$$

Notice that the cofactor for a given entry is really just the minor for the same entry with a "+1" or a "−1" in front of it. The following "table" shows whether there should be a "+1" or a "−1" in front of a minor for a given cofactor.
$$\begin{bmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
To use the table for the cofactor $C_{ij}$ we simply go to the $i$th row and $j$th column in the table above; if there is a "+" there we leave the minor alone, and if there is a "−" there we will tack a "−1" onto the appropriate minor. So, for $C_{34}$ we go to the 3rd row and 4th column and see that we have a minus sign, and so we know that $C_{34} = -M_{34}$.

Here is how we can use cofactors to compute the determinant of any matrix.

Theorem 2 If $A$ is an $n \times n$ matrix.
(a) Choose any row, say row $i$, then,
$$\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}$$
(b) Choose any column, say column $j$, then,
$$\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}$$

Here is a quick example of how to use this theorem.

Example 4 For the following matrix compute the determinant using the given cofactor expansions.
$$A = \begin{bmatrix} 4 & 2 & 1 \\ -2 & -6 & 3 \\ -7 & 5 & 0 \end{bmatrix}$$
(a) Expand along the first row.
(b) Expand along the third row.
(c) Expand along the second column.

Solution First, notice that according to the theorem we should get the same result in all three parts.

(a) Expand along the first row. Here is the cofactor expansion in terms of symbols for this part.
$$\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$$
Now, let's plug in for all the quantities. We will just plug in for the entries. For the cofactors we'll write down the minor and a "+1" or a "−1" depending on which sign each minor needs. We'll determine these signs by going to our "sign matrix" above, starting at the first entry in the particular row/column we're expanding along, and then as we move along that row or column we'll write down the appropriate sign.

Here is the work for this expansion.
$$\det(A) = (4)(+1)\begin{vmatrix} -6 & 3 \\ 5 & 0 \end{vmatrix} + (2)(-1)\begin{vmatrix} -2 & 3 \\ -7 & 0 \end{vmatrix} + (1)(+1)\begin{vmatrix} -2 & -6 \\ -7 & 5 \end{vmatrix} = 4(-15) - 2(21) + 1(-52) = -154$$
We'll leave it to you to verify the $2 \times 2$ determinant computations.

(b) Expand along the third row. We'll do this one without all the explanations.
$$\det(A) = a_{31}C_{31} + a_{32}C_{32} + a_{33}C_{33} = (-7)(+1)\begin{vmatrix} 2 & 1 \\ -6 & 3 \end{vmatrix} + (5)(-1)\begin{vmatrix} 4 & 1 \\ -2 & 3 \end{vmatrix} + (0)(+1)\begin{vmatrix} 4 & 2 \\ -2 & -6 \end{vmatrix} = -7(12) - 5(14) + 0(-20) = -154$$
So, the same answer as the first part, which is good since that was supposed to happen. Notice that the signs for the cofactors in this case were the same as the signs in the first case. This is because the first and third rows of our "sign matrix" are identical. Also, notice that we didn't really need to compute the third cofactor since the third entry was zero. We did it here just to get one more example of a cofactor into the notes.

(c) Expand along the second column. Let's take a look at the final expansion.
In this one we're going down a column, and notice from our "sign matrix" that this time we'll be starting the cofactor signs off with a "−1", unlike the first two expansions.
$$\det(A) = a_{12}C_{12} + a_{22}C_{22} + a_{32}C_{32} = (2)(-1)\begin{vmatrix} -2 & 3 \\ -7 & 0 \end{vmatrix} + (-6)(+1)\begin{vmatrix} 4 & 1 \\ -7 & 0 \end{vmatrix} + (5)(-1)\begin{vmatrix} 4 & 1 \\ -2 & 3 \end{vmatrix} = -2(21) - 6(7) - 5(14) = -154$$
Again, the same as the first two, as we expected.

As this example has shown, it doesn't matter which row or column we expand along; we will always get the same result.

In this example we performed a cofactor expansion on a $3 \times 3$ matrix since we could easily check the results using the process we discussed above. Let's work one more example, only this time we'll find the determinant of a $4 \times 4$ matrix and so we'll have no choice but to use a cofactor expansion.

Example 5 Using a cofactor expansion compute the determinant of,
$$A = \begin{bmatrix} 5 & -2 & 2 & 7 \\ 1 & 0 & 0 & 3 \\ -3 & 1 & 5 & 0 \\ 3 & -1 & -9 & 4 \end{bmatrix}$$

Solution Since the row or column to use for the cofactor expansion was not given in the problem statement, we get to choose which one we want to use. From the previous example we know that it won't matter which row or column we choose. However, having said that, notice that if there is a zero entry we won't need to compute the cofactor/minor for that entry since it will just multiply out to zero.

So, it looks like the second row would be a good choice for the expansion since it has two zeroes in it. Here is the expansion for this row. As with the previous expansions we'll explicitly give the "+1" or "−1" for the cofactors, and the minors as well, so you can see where everything in the expansion is coming from.
$$\det(A) = (1)(-1)\begin{vmatrix} -2 & 2 & 7 \\ 1 & 5 & 0 \\ -1 & -9 & 4 \end{vmatrix} + (0)(+1)M_{22} + (0)(-1)M_{23} + (3)(+1)\begin{vmatrix} 5 & -2 & 2 \\ -3 & 1 & 5 \\ 3 & -1 & -9 \end{vmatrix}$$
We didn't bother to write down the minors $M_{22}$ and $M_{23}$ because of the zero entries. How we choose to compute the determinants for the first and last entries is up to us at this point. We could use a cofactor expansion on each of them or we could use the technique we saw above. Either way will get the same answer and we'll leave it to you to verify these determinants. The determinant for this matrix is,
$$\det(A) = (-1)(-76) + (3)(4) = 88$$

We'll close this review off with a significantly shortened version of Theorem 9 from the Properties of Determinants section. We won't need most of the theorem, but there are two bits of it that we'll need, so here they are. Also, there are two ways in which the theorem can be stated now that we've stripped out the other pieces, and so we'll give both ways of stating it here.

Theorem 3 If $A$ is an $n \times n$ matrix then
(a) The only solution to the system $A\mathbf{x} = \mathbf{0}$ is the trivial solution (i.e. $\mathbf{x} = \mathbf{0}$) if and only if $\det(A) \ne 0$.
(b) The system $A\mathbf{x} = \mathbf{0}$ will have a non-trivial solution (i.e. $\mathbf{x} \ne \mathbf{0}$) if and only if $\det(A) = 0$.

Note that these two statements really are equivalent. Also, recall that when we say "if and only if" in a theorem statement we mean that the statement works in both directions. For example, let's take a look at the second part of this theorem. This statement says that if $A\mathbf{x} = \mathbf{0}$ has non-trivial solutions then we know that we'll also have $\det(A) = 0$. On the other hand, it also says that if $\det(A) = 0$ then we'll also know that the system will have non-trivial solutions.

This theorem will be key to allowing us to work problems in the next section.

This is then the review of determinants.
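As a quick numerical companion to this review (my own sketch, not from the original notes), here is a short recursive implementation of the cofactor expansion from Theorem 2, expanding along the first row; the function name is illustrative and the check uses the matrix from Example 5.

```python
# A minimal sketch of the cofactor expansion of Theorem 2, expanding
# along the first row. The sign of each term is (-1)^(1+j), which for a
# 0-indexed column j is (-1)**j.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entries contribute nothing, as noted above
        # minor M_{1,j+1}: remove row 1 and column j+1
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# The matrix from Example 5; the expansion gives 88.
A = [[5, -2,  2, 7],
     [1,  0,  0, 3],
     [-3, 1,  5, 0],
     [3, -1, -9, 4]]
print(det(A))  # 88
```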
Again, if you need a more detailed look at either determinants or their properties you should go back and take a look at the Determinant chapter.

Eigenvalues and Eigenvectors

As noted in the introduction to this chapter, we're going to start with a square matrix $A$ and try to determine vectors $\mathbf{x}$ and scalars $\lambda$ so that we will have,
$$A\mathbf{x} = \lambda\mathbf{x}$$
In other words, when this happens, multiplying $\mathbf{x}$ by the matrix $A$ will be equivalent to multiplying $\mathbf{x}$ by the scalar $\lambda$. This will not be possible for all vectors $\mathbf{x}$, nor will it be possible for all scalars $\lambda$. The goal of this section is to determine the vectors and scalars for which this will happen.

So, let's start off with the following definition.

Definition 1 Suppose that $A$ is an $n \times n$ matrix. Also suppose that $\mathbf{x}$ is a non-zero vector from $\mathbb{R}^n$ and that $\lambda$ is any scalar (this can be zero) so that,
$$A\mathbf{x} = \lambda\mathbf{x}$$
We then call $\mathbf{x}$ an eigenvector of $A$ and $\lambda$ an eigenvalue of $A$.

We will often call $\mathbf{x}$ the eigenvector corresponding to or associated with $\lambda$, and we will often call $\lambda$ the eigenvalue corresponding to or associated with $\mathbf{x}$. Note that eigenvalues and eigenvectors will always occur in pairs. You can't have an eigenvalue without an eigenvector and you can't have an eigenvector without an eigenvalue.

Example 1 Suppose
$$A = \begin{bmatrix} 6 & 16 \\ -1 & -4 \end{bmatrix} \qquad\text{then}\qquad \mathbf{x} = \begin{bmatrix} -8 \\ 1 \end{bmatrix}$$
is an eigenvector with corresponding eigenvalue 4 because
$$A\mathbf{x} = \begin{bmatrix} 6 & 16 \\ -1 & -4 \end{bmatrix}\begin{bmatrix} -8 \\ 1 \end{bmatrix} = \begin{bmatrix} -32 \\ 4 \end{bmatrix} = 4\begin{bmatrix} -8 \\ 1 \end{bmatrix} = \lambda\mathbf{x}$$

Okay, what we need to do is figure out just how we can determine the eigenvalues and eigenvectors for a given matrix. This is actually easier to do than it might at first appear to be. We'll start with finding the eigenvalues for a matrix, and once we have those we'll be able to find the eigenvectors corresponding to each eigenvalue.

Let's start with $A\mathbf{x} = \lambda\mathbf{x}$ and rewrite it as follows,
$$A\mathbf{x} = \lambda I\mathbf{x}$$
Note that all we did was insert the identity matrix into the right side. Doing this allows us to further rewrite this equation as follows,
$$\lambda I\mathbf{x} - A\mathbf{x} = \mathbf{0} \qquad\Rightarrow\qquad (\lambda I - A)\mathbf{x} = \mathbf{0}$$
Now, if $\lambda$ is going to be an eigenvalue of $A$, this system must have a non-zero solution, $\mathbf{x}$, since we know that eigenvectors associated with $\lambda$ cannot be the zero vector. However, Theorem 3 from the previous section, or more generally Theorem 8 from the Fundamental Subspaces section, tells us that this system will have a non-zero solution if and only if
$$\det(\lambda I - A) = 0$$
So, eigenvalues will be scalars, $\lambda$, for which the matrix $\lambda I - A$ will be singular, i.e. $\det(\lambda I - A) = 0$.

Let's get a couple more definitions out of the way and then we'll work some examples of finding eigenvalues.

Definition 2 Suppose $A$ is an $n \times n$ matrix. Then,
$$\det(\lambda I - A) = 0$$
is called the characteristic equation of $A$. When computed it will be an $n$th degree polynomial in $\lambda$ of the form,
$$p(\lambda) = \lambda^{n} + c_{n-1}\lambda^{n-1} + \cdots + c_{1}\lambda + c_{0}$$
called the characteristic polynomial of $A$.

Note that the coefficient of $\lambda^{n}$ is 1 (one) and that is NOT a typo in the definition. This also guarantees that this polynomial will be of degree $n$. Also, from the Fundamental Theorem of Algebra we now know that there will be exactly $n$ eigenvalues (possibly including repeats) for an $n \times n$ matrix $A$. Note that because the Fundamental Theorem of Algebra does allow for the possibility of repeated eigenvalues, there will be at most $n$ distinct eigenvalues for an $n \times n$ matrix.
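To make the defining equation concrete, here is a small sketch (mine, not from the notes) that checks the eigenpair from Example 1 numerically and also finds both eigenvalues of the same matrix by taking the roots of its characteristic polynomial; `numpy.poly` applied to a square matrix returns exactly the coefficients of $\det(\lambda I - A)$.

```python
import numpy as np

A = np.array([[6, 16],
              [-1, -4]])

# Check the eigenpair from Example 1: A x = 4 x
x = np.array([-8, 1])
print(A @ x)              # [-32  4], which equals 4 * x

# Characteristic polynomial det(lambda*I - A) = lambda^2 - 2*lambda - 8
coeffs = np.poly(A)       # [ 1. -2. -8.]
print(np.roots(coeffs))   # eigenvalues: 4 and -2
```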
Because an eigenvalue can repeat itself in the list of all eigenvalues, we'd like a way to differentiate between eigenvalues that repeat and those that don't. The following definition will do this for us.

Definition 3 Suppose $A$ is an $n \times n$ matrix and that $\lambda_1$, $\lambda_2$, …, $\lambda_n$ is the complete list of all the eigenvalues of $A$ including repeats. If $\lambda$ occurs exactly once in this list then we call $\lambda$ a simple eigenvalue. If $\lambda$ occurs $k \ge 2$ times in the list we say that $\lambda$ has multiplicity $k$.

Okay, let's find some eigenvalues; we'll start with some $2 \times 2$ matrices.

Example 2 Find all the eigenvalues for the given matrices.
(a) $A = \begin{bmatrix} 6 & 16 \\ -1 & -4 \end{bmatrix}$
(b) $A = \begin{bmatrix} -4 & 2 \\ 3 & -5 \end{bmatrix}$
(c) $A = \begin{bmatrix} 7 & -1 \\ 4 & 3 \end{bmatrix}$

Solution
(a) We'll do this one with a little more detail than we'll do the other two. First we'll need the matrix $\lambda I - A$.
$$\lambda I - A = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} - \begin{bmatrix} 6 & 16 \\ -1 & -4 \end{bmatrix} = \begin{bmatrix} \lambda - 6 & -16 \\ 1 & \lambda + 4 \end{bmatrix}$$
Next we need the determinant of this matrix, which gives us the characteristic polynomial.
$$\det(\lambda I - A) = (\lambda - 6)(\lambda + 4) - (-16)(1) = \lambda^{2} - 2\lambda - 8$$
Now, set this equal to zero and solve for the eigenvalues.
$$\lambda^{2} - 2\lambda - 8 = (\lambda - 4)(\lambda + 2) = 0 \qquad\Rightarrow\qquad \lambda_1 = -2,\ \lambda_2 = 4$$
So, we have two eigenvalues, and since they occur only once in the list they are both simple eigenvalues.

(b) Here is the matrix $\lambda I - A$ and its characteristic polynomial.
$$\lambda I - A = \begin{bmatrix} \lambda + 4 & -2 \\ -3 & \lambda + 5 \end{bmatrix} \qquad \det(\lambda I - A) = \lambda^{2} + 9\lambda + 14$$
We'll leave it to you to verify both of these. Now, set the characteristic polynomial equal to zero and solve for the eigenvalues.
$$\lambda^{2} + 9\lambda + 14 = (\lambda + 7)(\lambda + 2) = 0 \qquad\Rightarrow\qquad \lambda_1 = -7,\ \lambda_2 = -2$$
Again, we get two simple eigenvalues.

(c) Here is the matrix $\lambda I - A$ and its characteristic polynomial.
$$\lambda I - A = \begin{bmatrix} \lambda - 7 & 1 \\ -4 & \lambda - 3 \end{bmatrix} \qquad \det(\lambda I - A) = \lambda^{2} - 10\lambda + 25$$
We'll leave it to you to verify both of these. Now, set the characteristic polynomial equal to zero and solve for the eigenvalues.
$$\lambda^{2} - 10\lambda + 25 = (\lambda - 5)^{2} = 0 \qquad\Rightarrow\qquad \lambda_{1,2} = 5$$
In this case we have an eigenvalue of multiplicity two. Sometimes we call this kind of eigenvalue a double eigenvalue. Notice as well that we used the notation $\lambda_{1,2}$ to denote the fact that this was a double eigenvalue.

Now, let's take a look at some $3 \times 3$ matrices.

Example 3 Find all the eigenvalues for the given matrices.
(a) $A = \begin{bmatrix} 4 & 0 & 1 \\ -1 & -6 & -2 \\ 5 & 0 & 0 \end{bmatrix}$
(b) $A = \begin{bmatrix} 6 & 3 & -8 \\ 0 & -2 & 0 \\ 1 & 0 & -3 \end{bmatrix}$
(c) $A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}$
(d) $A = \begin{bmatrix} 4 & 0 & -1 \\ 0 & 3 & 0 \\ 1 & 0 & 2 \end{bmatrix}$

Solution
(a) As with the previous example we'll do this one in a little more detail than the remaining parts. First, we'll need $\lambda I - A$,
$$\lambda I - A = \begin{bmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{bmatrix} - \begin{bmatrix} 4 & 0 & 1 \\ -1 & -6 & -2 \\ 5 & 0 & 0 \end{bmatrix} = \begin{bmatrix} \lambda - 4 & 0 & -1 \\ 1 & \lambda + 6 & 2 \\ -5 & 0 & \lambda \end{bmatrix}$$
Now, let's take the determinant of this matrix and get the characteristic polynomial for $A$. We'll use the "trick" that we reviewed in the previous section to take the determinant. You could also use cofactors if you prefer that method. The result will be the same.
$$\det(\lambda I - A) = (\lambda - 4)(\lambda + 6)\lambda - 5(\lambda + 6) = \lambda^{3} + 2\lambda^{2} - 29\lambda - 30$$
Next, set this equal to zero.
$$\lambda^{3} + 2\lambda^{2} - 29\lambda - 30 = 0$$
Now, most of us aren't that great at finding the roots of a cubic polynomial. Luckily there is a way to at least get us started. It won't always work, but if it does it can greatly reduce the amount of work that we need to do.

Suppose we're trying to find the roots of an equation of the form,
$$\lambda^{n} + c_{n-1}\lambda^{n-1} + \cdots + c_{1}\lambda + c_{0} = 0$$
where the $c_i$ are all integers. If there are integer solutions to this (and there may NOT be) then we know that they must be divisors of $c_0$. This won't hand us the integer solutions themselves, but it will allow us to write down a list of possible integer solutions. The list will be all possible divisors of $c_0$.

In this case the list of possible integer solutions is all possible divisors of $-30$.
$$\pm 1,\ \pm 2,\ \pm 3,\ \pm 5,\ \pm 6,\ \pm 10,\ \pm 15,\ \pm 30$$
Now, that may seem like a lot of solutions that we'll need to check. However, it isn't quite that bad. Start with the smaller possible solutions and plug them in until you find one (i.e. until the polynomial is zero for one of them) and then stop. In this case the smallest one in the list that works is $-1$. This means that
$$\lambda - (-1) = \lambda + 1$$
must be a factor of the characteristic polynomial. In other words, we can write the characteristic polynomial as,
$$\lambda^{3} + 2\lambda^{2} - 29\lambda - 30 = (\lambda + 1)\,q(\lambda)$$
where $q(\lambda)$ is a quadratic polynomial. We find $q(\lambda)$ by performing long division on the characteristic polynomial. Doing this in this case gives,
$$\lambda^{3} + 2\lambda^{2} - 29\lambda - 30 = (\lambda + 1)(\lambda^{2} + \lambda - 30)$$
At this point all we need to do is find the solutions to the quadratic, and nicely enough for us it factors in this case. So, putting all this together gives,
$$(\lambda + 1)(\lambda + 6)(\lambda - 5) = 0 \qquad\Rightarrow\qquad \lambda_1 = -1,\ \lambda_2 = -6,\ \lambda_3 = 5$$
So, this matrix has three simple eigenvalues.

(b) Here is $\lambda I - A$ and the characteristic polynomial for this matrix.
$$\lambda I - A = \begin{bmatrix} \lambda - 6 & -3 & 8 \\ 0 & \lambda + 2 & 0 \\ -1 & 0 & \lambda + 3 \end{bmatrix} \qquad \det(\lambda I - A) = \lambda^{3} - \lambda^{2} - 16\lambda - 20$$
Now, in this case the list of possible integer solutions to the characteristic polynomial is,
$$\pm 1,\ \pm 2,\ \pm 4,\ \pm 5,\ \pm 10,\ \pm 20$$
Again, if we start with the smallest integers in the list we'll find that $-2$ is the first integer solution. Therefore, $\lambda + 2$ must be a factor of the characteristic polynomial. Factoring this out of the characteristic polynomial gives,
$$\lambda^{3} - \lambda^{2} - 16\lambda - 20 = (\lambda + 2)(\lambda^{2} - 3\lambda - 10)$$
Finally, factoring the quadratic and setting it equal to zero gives us,
$$(\lambda + 2)^{2}(\lambda - 5) = 0 \qquad\Rightarrow\qquad \lambda_{1,2} = -2,\ \lambda_3 = 5$$
So, we have one double eigenvalue ($\lambda_{1,2} = -2$) and one simple eigenvalue ($\lambda_3 = 5$).

(c) Here is $\lambda I - A$ and the characteristic polynomial for this matrix.
$$\lambda I - A = \begin{bmatrix} \lambda & -1 & -1 \\ -1 & \lambda & -1 \\ -1 & -1 & \lambda \end{bmatrix} \qquad \det(\lambda I - A) = \lambda^{3} - 3\lambda - 2$$
We have a very small list of possible integer solutions for this characteristic polynomial.
$$\pm 1,\ \pm 2$$
The smallest integer that works in this case is $-1$, and we'll leave it to you to verify that the complete factored form of the characteristic polynomial is,
$$\lambda^{3} - 3\lambda - 2 = (\lambda + 1)^{2}(\lambda - 2)$$
and so we can see that we've got two eigenvalues, $\lambda_{1,2} = -1$ (a multiplicity 2 eigenvalue) and $\lambda_3 = 2$ (a simple eigenvalue).
(d) Here is $\lambda I - A$ and the characteristic polynomial for this matrix.
$$\lambda I - A = \begin{bmatrix} \lambda - 4 & 0 & 1 \\ 0 & \lambda - 3 & 0 \\ -1 & 0 & \lambda - 2 \end{bmatrix} \qquad \det(\lambda I - A) = \lambda^{3} - 9\lambda^{2} + 27\lambda - 27$$
Okay, in this case the list of possible integer solutions is,
$$\pm 1,\ \pm 3,\ \pm 9,\ \pm 27$$
The smallest integer that will work in this case is 3. We'll leave it to you to verify that the factored form of the characteristic polynomial is,
$$\lambda^{3} - 9\lambda^{2} + 27\lambda - 27 = (\lambda - 3)^{3}$$
and so we can see that if we set this equal to zero and solve we will have one eigenvalue of multiplicity 3 (sometimes called a triple eigenvalue),
$$\lambda_{1,2,3} = 3$$

As you can see, the work for these gets progressively more difficult as we increase the size of the matrix; for a general matrix larger than $3 \times 3$ we'd need to use the method of cofactors to determine the characteristic polynomial. There is one kind of matrix, however, for which we can pick the eigenvalues right off the matrix itself without doing any work, and the size won't matter.

Theorem 1 Suppose $A$ is an $n \times n$ triangular matrix. Then the eigenvalues will be the diagonal entries, $a_{11}$, $a_{22}$, …, $a_{nn}$.

Proof: We'll give the proof for an upper triangular matrix and leave it to you to verify the proof for a lower triangular and a diagonal matrix. We'll start with,
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$
Now, we can write down $\lambda I - A$,
$$\lambda I - A = \begin{bmatrix} \lambda - a_{11} & -a_{12} & \cdots & -a_{1n} \\ 0 & \lambda - a_{22} & \cdots & -a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda - a_{nn} \end{bmatrix}$$
Now, this is still an upper triangular matrix and we know that the determinant of a triangular matrix is just the product of its main diagonal entries. The characteristic polynomial is then,
$$\det(\lambda I - A) = (\lambda - a_{11})(\lambda - a_{22})\cdots(\lambda - a_{nn})$$
Setting this equal to zero and solving gives the eigenvalues,
$$\lambda_1 = a_{11} \qquad \lambda_2 = a_{22} \qquad \cdots \qquad \lambda_n = a_{nn}$$

Example 4 Find the eigenvalues of the following matrix.
$$A = \begin{bmatrix} 6 & 0 & 0 & 0 & 0 \\ 9 & -4 & 0 & 0 & 0 \\ 2 & 0 & 11 & 0 & 0 \\ 1 & 1 & 3 & 0 & 0 \\ 0 & 1 & 7 & 4 & 8 \end{bmatrix}$$

Solution Since this is a lower triangular matrix we can use the previous theorem to write down the eigenvalues. They will simply be the main diagonal entries. The eigenvalues are,
$$\lambda_1 = 6 \qquad \lambda_2 = -4 \qquad \lambda_3 = 11 \qquad \lambda_4 = 0 \qquad \lambda_5 = 8$$

We can now find the eigenvalues for a matrix. We next need to address the issue of finding their corresponding eigenvectors. Recall that, given an eigenvalue $\lambda$, the eigenvector(s) of $A$ that correspond to $\lambda$ will be the vectors $\mathbf{x}$ such that,
$$A\mathbf{x} = \lambda\mathbf{x} \qquad\text{OR}\qquad (\lambda I - A)\mathbf{x} = \mathbf{0}$$
Also, recall that $\lambda$ was chosen so that $\lambda I - A$ would be a singular matrix. This in turn guaranteed that we would have a non-zero solution to the equation above. Note that in doing this we don't just guarantee a non-zero solution, we also guarantee that we'll have infinitely many solutions to the system.

We have one quick definition that we need to take care of before we start working problems.

Definition 4 The set of all solutions to $(\lambda I - A)\mathbf{x} = \mathbf{0}$ is called the eigenspace of $A$ corresponding to $\lambda$.

Note that there will be one eigenspace of $A$ for each distinct eigenvalue, and so there will be anywhere from 1 to $n$ eigenspaces for an $n \times n$ matrix, depending upon the number of distinct eigenvalues that the matrix has. Also, notice that we're really just finding the null space for a system, and we've looked at that in several sections in the previous chapter.
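Before working eigenspace examples, here is a small sketch (mine, not from the notes) of the integer-root trick used in Example 3: candidate integer roots of a monic polynomial with integer coefficients must divide the constant term, so we can enumerate and test them directly.

```python
# Candidate integer roots of a monic integer polynomial must divide c0.
def integer_roots(coeffs):
    """coeffs = [1, c_{n-1}, ..., c1, c0] for a monic polynomial."""
    c0 = coeffs[-1]
    if c0 == 0:
        return [0]  # lambda = 0 is a root; factor it out by hand first
    divisors = [d for d in range(1, abs(c0) + 1) if c0 % d == 0]
    candidates = [s * d for d in divisors for s in (1, -1)]
    def p(x):
        v = 0
        for c in coeffs:
            v = v * x + c  # Horner's method
        return v
    return [r for r in candidates if p(r) == 0]

# lambda^3 + 2 lambda^2 - 29 lambda - 30 from Example 3(a)
print(integer_roots([1, 2, -29, -30]))   # [-1, 5, -6]
```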
Let's take a look at some eigenspaces for some of the matrices we found eigenvalues for above.

Example 5 For each of the following matrices determine the eigenvectors corresponding to each eigenvalue and determine a basis for the eigenspace of the matrix corresponding to each eigenvalue.
(a) $A = \begin{bmatrix} 6 & 16 \\ -1 & -4 \end{bmatrix}$
(b) $A = \begin{bmatrix} 7 & -1 \\ 4 & 3 \end{bmatrix}$

Solution We determined the eigenvalues for each of these in Example 2 above, so refer to that example for the details in finding them. For each eigenvalue we will need to solve the system,
$$(\lambda I - A)\mathbf{x} = \mathbf{0}$$
to determine the general form of the eigenvector. Once we have that we can use the general form of the eigenvector to find a basis for the eigenspace.

(a) We know that the eigenvalues for this matrix are $\lambda_1 = -2$ and $\lambda_2 = 4$. Let's first find the eigenvector(s) and eigenspace for $\lambda_1 = -2$. Referring to Example 2 for the formula for $\lambda I - A$ and plugging $\lambda_1 = -2$ into it, we can see that the system we need to solve is,
$$\begin{bmatrix} -8 & -16 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
We'll leave it to you to verify that the solution to this system is,
$$x_1 = -2t \qquad x_2 = t$$
Therefore, the general eigenvector corresponding to $\lambda_1 = -2$ is of the form,
$$\mathbf{x} = \begin{bmatrix} -2t \\ t \end{bmatrix} = t\begin{bmatrix} -2 \\ 1 \end{bmatrix}$$
The eigenspace is all vectors of this form, and so we can see that a basis for the eigenspace corresponding to $\lambda_1 = -2$ is,
$$\mathbf{v}_1 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}$$
Now, let's find the eigenvector(s) and eigenspace for $\lambda_2 = 4$. Plugging $\lambda_2 = 4$ into the formula for $\lambda I - A$ from Example 2 gives the following system we need to solve,
$$\begin{bmatrix} -2 & -16 \\ 1 & 8 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
The solution to this system is (you should verify this),
$$x_1 = -8t \qquad x_2 = t$$
The general eigenvector and a basis for the eigenspace corresponding to $\lambda_2 = 4$ are then,
$$\mathbf{x} = \begin{bmatrix} -8t \\ t \end{bmatrix} = t\begin{bmatrix} -8 \\ 1 \end{bmatrix} \qquad \& \qquad \mathbf{v}_2 = \begin{bmatrix} -8 \\ 1 \end{bmatrix}$$
Note that if we wanted our hands on specific eigenvectors for each eigenvalue, the basis vector for each eigenspace would work. So, if we do that we could use the following eigenvectors (and their corresponding eigenvalues) if we'd like.
$$\lambda_1 = -2,\ \mathbf{v}_1 = \begin{bmatrix} -2 \\ 1 \end{bmatrix} \qquad \lambda_2 = 4,\ \mathbf{v}_2 = \begin{bmatrix} -8 \\ 1 \end{bmatrix}$$
Note as well that these eigenvectors are linearly independent vectors.

(b) From Example 2 we know that $\lambda_{1,2} = 5$ is a double eigenvalue and so there will be a single eigenspace to compute for this matrix. Using the formula for $\lambda I - A$ from Example 2 and plugging $\lambda_{1,2} = 5$ into it gives the following system that we'll need to solve for the eigenvector and eigenspace.
$$\begin{bmatrix} -2 & 1 \\ -4 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
The solution to this system is,
$$x_1 = \tfrac{1}{2}t \qquad x_2 = t$$
The general eigenvector and a basis for the eigenspace corresponding to $\lambda_{1,2} = 5$ are then,
$$\mathbf{x} = \begin{bmatrix} \tfrac{1}{2}t \\ t \end{bmatrix} = t\begin{bmatrix} \tfrac{1}{2} \\ 1 \end{bmatrix} \qquad \& \qquad \mathbf{v}_1 = \begin{bmatrix} \tfrac{1}{2} \\ 1 \end{bmatrix}$$
In this case we get only a single eigenvector, and so a good eigenvalue/eigenvector pair is,
$$\lambda_{1,2} = 5,\ \mathbf{v}_1 = \begin{bmatrix} \tfrac{1}{2} \\ 1 \end{bmatrix}$$

We didn't look at the second matrix from Example 2 in the previous example. You should try to determine the eigenspace(s) for that matrix. The work will follow what we did in the first part of the previous example since there are two simple eigenvalues.
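Since the eigenspace is just the null space of $\lambda I - A$, the hand computation above can be reproduced exactly; here is a quick sketch (mine, not from the notes) using SymPy, which returns exact rational basis vectors matching $\mathbf{v}_1$ and $\mathbf{v}_2$ from part (a).

```python
# For each eigenvalue, the eigenspace is the null space of (lambda*I - A).
from sympy import Matrix, eye

A = Matrix([[6, 16],
            [-1, -4]])

for lam in (-2, 4):
    basis = (lam * eye(2) - A).nullspace()
    print(lam, [list(v) for v in basis])
# -2 -> [[-2, 1]]  (the basis vector v1 above)
#  4 -> [[-8, 1]]  (the basis vector v2 above)
```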
Now, let's determine the eigenspaces for the matrices in Example 3.

Example 6 Determine the eigenvectors corresponding to each eigenvalue and a basis for the eigenspace corresponding to each eigenvalue for each of the matrices from Example 3 above.

Solution The work of finding the eigenvalues for each of these is shown in Example 3 above. Also, we'll be doing this with less detail than in the previous example. In each part we'll use the formula for $\lambda I - A$ found in Example 3 and plug in each eigenvalue to find the system that we need to solve for the eigenvector(s) and eigenspace.

(a) $A = \begin{bmatrix} 4 & 0 & 1 \\ -1 & -6 & -2 \\ 5 & 0 & 0 \end{bmatrix}$

The eigenvalues for this matrix are $\lambda_1 = -1$, $\lambda_2 = -6$ and $\lambda_3 = 5$, so we'll have three eigenspaces to find. Starting with $\lambda_1 = -1$ we'll need to find the solution to the following system,
$$\begin{bmatrix} -5 & 0 & -1 \\ 1 & 5 & 2 \\ -5 & 0 & -1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = -\tfrac{1}{5}t,\quad x_2 = -\tfrac{9}{25}t,\quad x_3 = t$$
The general eigenvector and a basis for the eigenspace corresponding to $\lambda_1 = -1$ are then,
$$\mathbf{x} = t\begin{bmatrix} -\tfrac{1}{5} \\ -\tfrac{9}{25} \\ 1 \end{bmatrix} \qquad \& \qquad \mathbf{v}_1 = \begin{bmatrix} -\tfrac{1}{5} \\ -\tfrac{9}{25} \\ 1 \end{bmatrix}$$
Now, let's take a look at $\lambda_2 = -6$. Here is the system we need to solve,
$$\begin{bmatrix} -10 & 0 & -1 \\ 1 & 0 & 2 \\ -5 & 0 & -6 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = 0,\quad x_2 = t,\quad x_3 = 0$$
Here are the general eigenvector and a basis for the eigenspace corresponding to $\lambda_2 = -6$.
$$\mathbf{x} = t\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \qquad \& \qquad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$$
Finally, here is the system for $\lambda_3 = 5$.
$$\begin{bmatrix} 1 & 0 & -1 \\ 1 & 11 & 2 \\ -5 & 0 & 5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = t,\quad x_2 = -\tfrac{3}{11}t,\quad x_3 = t$$
The general eigenvector and a basis for the eigenspace corresponding to $\lambda_3 = 5$ are then,
$$\mathbf{x} = t\begin{bmatrix} 1 \\ -\tfrac{3}{11} \\ 1 \end{bmatrix} \qquad \& \qquad \mathbf{v}_3 = \begin{bmatrix} 1 \\ -\tfrac{3}{11} \\ 1 \end{bmatrix}$$
Now, as with the previous example, let's write down a specific set of eigenvalue/eigenvector pairs for this matrix just in case we happen to need them for some reason. We can get specific eigenvectors using the basis vectors for the eigenspaces, as we did in the previous example.
$$\lambda_1 = -1,\ \mathbf{v}_1 = \begin{bmatrix} -\tfrac{1}{5} \\ -\tfrac{9}{25} \\ 1 \end{bmatrix} \qquad \lambda_2 = -6,\ \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \qquad \lambda_3 = 5,\ \mathbf{v}_3 = \begin{bmatrix} 1 \\ -\tfrac{3}{11} \\ 1 \end{bmatrix}$$
You might want to verify that these three vectors are linearly independent vectors.

(b) $A = \begin{bmatrix} 6 & 3 & -8 \\ 0 & -2 & 0 \\ 1 & 0 & -3 \end{bmatrix}$

The eigenvalues for this matrix are $\lambda_{1,2} = -2$ and $\lambda_3 = 5$, so it looks like we'll have two eigenspaces to find for this matrix. We'll start with $\lambda_{1,2} = -2$. Here are the system that we need to solve and its solution,
$$\begin{bmatrix} -8 & -3 & 8 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = t,\quad x_2 = 0,\quad x_3 = t$$
The general eigenvector and a basis for the eigenspace corresponding to $\lambda_{1,2} = -2$ are then,
$$\mathbf{x} = t\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \qquad \& \qquad \mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
Note that even though we have a double eigenvalue we get a single basis vector here.
Next, here are the system for $\lambda_3 = 5$ that we need to solve and its solution,
$$\begin{bmatrix} -1 & -3 & 8 \\ 0 & 7 & 0 \\ -1 & 0 & 8 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = 8t,\quad x_2 = 0,\quad x_3 = t$$
The general eigenvector and a basis for the eigenspace corresponding to $\lambda_3 = 5$ are,
$$\mathbf{x} = t\begin{bmatrix} 8 \\ 0 \\ 1 \end{bmatrix} \qquad \& \qquad \mathbf{v}_2 = \begin{bmatrix} 8 \\ 0 \\ 1 \end{bmatrix}$$
A set of eigenvalue/eigenvector pairs for this matrix is,
$$\lambda_{1,2} = -2,\ \mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \qquad \lambda_3 = 5,\ \mathbf{v}_2 = \begin{bmatrix} 8 \\ 0 \\ 1 \end{bmatrix}$$
Unlike the previous part, we only have two eigenvectors here even though we have three eigenvalues (if you include repeats anyway).

(c) $A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}$

As with the previous part we've got two eigenvalues, $\lambda_{1,2} = -1$ and $\lambda_3 = 2$, and so we'll again have two eigenspaces to find here. We'll start with $\lambda_{1,2} = -1$. Here is the system we need to solve,
$$\begin{bmatrix} -1 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & -1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = -s - t,\quad x_2 = s,\quad x_3 = t$$
The general eigenvector corresponding to $\lambda_{1,2} = -1$ is then,
$$\mathbf{x} = \begin{bmatrix} -s - t \\ s \\ t \end{bmatrix} = s\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + t\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$
Now, the eigenspace is spanned by the two vectors above, and since they are linearly independent we can see that a basis for the eigenspace corresponding to $\lambda_{1,2} = -1$ is,
$$\mathbf{v}_1 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \qquad \mathbf{v}_2 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$
Here is the system for $\lambda_3 = 2$ that we need to solve,
$$\begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = t,\quad x_2 = t,\quad x_3 = t$$
The general eigenvector and a basis for the eigenspace corresponding to $\lambda_3 = 2$ are,
$$\mathbf{x} = t\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \qquad \& \qquad \mathbf{v}_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
Okay, in this case if we want to write down a set of eigenvalue/eigenvector pairs we've got a slightly different situation than we've seen to this point. In the previous example we had an eigenvalue of multiplicity two but only got a single eigenvector for it. In this case, because the eigenspace for our multiplicity two eigenvalue has a dimension of two, we can use each basis vector as a separate eigenvector, and so the eigenvalue/eigenvector pairs this time are,
$$\lambda_1 = -1,\ \mathbf{v}_1 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \qquad \lambda_2 = -1,\ \mathbf{v}_2 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \qquad \lambda_3 = 2,\ \mathbf{v}_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
Note that we listed the eigenvalue of $-1$ twice, once for each eigenvector. You should verify that these are all linearly independent.

(d) $A = \begin{bmatrix} 4 & 0 & -1 \\ 0 & 3 & 0 \\ 1 & 0 & 2 \end{bmatrix}$

In this case we had a single eigenvalue, $\lambda_{1,2,3} = 3$, so we'll have a single eigenspace to find. Here are the system and its solution for this eigenvalue.
$$\begin{bmatrix} -1 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\Rightarrow\qquad x_1 = t,\quad x_2 = s,\quad x_3 = t$$
The general eigenvector corresponding to $\lambda_{1,2,3} = 3$ is then,
$$\mathbf{x} = \begin{bmatrix} t \\ s \\ t \end{bmatrix} = t\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + s\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$$
As with the previous part, we can see that the eigenspace is spanned by the two vectors above, and since they are linearly independent a basis for the eigenspace corresponding to $\lambda_{1,2,3} = 3$ is,
$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \qquad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$$
Note that the two vectors above would also make a nice pair of eigenvectors for the single eigenvalue in this case.

Okay, let's go back and take a look at the eigenvector/eigenvalue pairs for a second.
First, there are reasons for wanting these, as we'll see in the next section. On occasion we really do want specific eigenvectors, and we generally want them to be linearly independent as well, as we'll see.

Also, we saw two examples of eigenvalues of multiplicity 2. In one case we got a single eigenvector and in the other we got two linearly independent eigenvectors. This will always be the case: if $\lambda$ is an eigenvalue of multiplicity $k$ then there will be anywhere from 1 to $k$ linearly independent eigenvectors, depending upon the dimension of the eigenspace.

Now, there is one type of eigenvalue that we've been avoiding to this point, and you may have noticed that already. Let's take a look at the following example to see what we've been avoiding.

Example 7 Find all the eigenvalues of
$$A = \begin{bmatrix} 6 & 5 \\ -8 & -6 \end{bmatrix}$$

Solution First we'll need the matrix $\lambda I - A$ and then we'll use that to find the characteristic equation.
$$\lambda I - A = \begin{bmatrix} \lambda - 6 & -5 \\ 8 & \lambda + 6 \end{bmatrix} \qquad \det(\lambda I - A) = (\lambda - 6)(\lambda + 6) + 40 = \lambda^{2} + 4$$
From this we can see that the eigenvalues will be complex. In fact they will be,
$$\lambda_1 = 2i \qquad \lambda_2 = -2i$$
So we got complex eigenvalues. We've not seen any of these to this point and this will be the only example we'll look at here. We have avoided complex eigenvalues to this point for a very important reason. Let's recall just what an eigenvalue is. An eigenvalue is a scalar such that,
$$\begin{bmatrix} 6 & 5 \\ -8 & -6 \end{bmatrix}\mathbf{x} = (2i)\,\mathbf{x}$$
Can you see the problem? In order to talk about the scalar multiplication on the right we need to be in a complex vector space! Up to this point we've been working exclusively with real vector spaces, and remember that in a real vector space the scalars are all real numbers. So, in order to work with complex eigenvalues we would need to be working in a complex vector space and we haven't looked at those yet. So, since we haven't looked at complex vector spaces yet, we will be working only with matrices that have real eigenvalues.

Note that this doesn't mean that complex eigenvalues are not important. There are some very important applications of complex eigenvalues in various areas of math, engineering and the sciences. Of course, there are also times where we would ignore complex eigenvalues.

We'll close this section out with a couple of nice theorems.

Theorem 2 Suppose that $\lambda$ is an eigenvalue of the matrix $A$ with corresponding eigenvector $\mathbf{x}$. Then if $k$ is a positive integer, $\lambda^{k}$ is an eigenvalue of the matrix $A^{k}$ with corresponding eigenvector $\mathbf{x}$.

Proof: The proof here is pretty simple.
$$A^{k}\mathbf{x} = A^{k-1}(A\mathbf{x}) = A^{k-1}(\lambda\mathbf{x}) = \lambda A^{k-2}(A\mathbf{x}) = \lambda A^{k-2}(\lambda\mathbf{x}) = \lambda^{2}A^{k-2}\mathbf{x} = \cdots = \lambda^{k}\mathbf{x}$$
So, from this we can see that $\lambda^{k}$ is an eigenvalue of $A^{k}$ with corresponding eigenvector $\mathbf{x}$.

Theorem 3 Suppose $A$ is an $n \times n$ matrix with eigenvalues $\lambda_1$, $\lambda_2$, …, $\lambda_n$ (possibly including repeats). Then,
(a) The determinant of $A$ is $\det(A) = \lambda_1\lambda_2\cdots\lambda_n$.
(b) The trace of $A$ is $\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.

We'll prove part (a) here. The proof of part (b) involves some fairly messy algebra and so we won't show it here.
Proof of (a): First, recall that eigenvalues of $A$ are roots of the characteristic polynomial, and hence we can write the characteristic polynomial as,
$$\det(\lambda I - A) = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n)$$
Now, plug $\lambda = 0$ into this and we get,
$$\det(-A) = (-\lambda_1)(-\lambda_2)\cdots(-\lambda_n) = (-1)^{n}\lambda_1\lambda_2\cdots\lambda_n$$
Finally, from Theorem 1 of the Properties of Determinants section we know that
$$\det(-A) = (-1)^{n}\det(A)$$
So, plugging this in gives,
$$\det(A) = \lambda_1\lambda_2\cdots\lambda_n$$

Diagonalization

In this section we're going to take a look at a special kind of matrix. We'll start out with the following definition.

Definition 1 Suppose that $A$ is a square matrix and further suppose that there exists an invertible matrix $P$ (of the same size as $A$ of course) such that $P^{-1}AP$ is a diagonal matrix. In such a case we call $A$ diagonalizable and say that $P$ diagonalizes $A$.

The following theorem will not only tell us when a matrix is diagonalizable, but the proof will tell us how to construct $P$ when $A$ is diagonalizable.

Theorem 1 Suppose that $A$ is an $n \times n$ matrix. Then the following are equivalent.
(a) $A$ is diagonalizable.
(b) $A$ has $n$ linearly independent eigenvectors.

Proof: We'll start by proving that $(a) \Rightarrow (b)$. So, assume that $A$ is diagonalizable, and so we know that an invertible matrix $P$ exists so that $P^{-1}AP$ is a diagonal matrix. Now, let $\mathbf{p}_1$, $\mathbf{p}_2$, …, $\mathbf{p}_n$ be the columns of $P$ and suppose that $D$ is the diagonal matrix we get from $P^{-1}AP$, i.e. $D = P^{-1}AP$. So, both $P$ and $D$ have the following forms,
$$P = \begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_n \end{bmatrix} \qquad D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
Also note that because $P$ is an invertible matrix, Theorem 8 from the Fundamental Subspaces section tells us that the columns of $P$ will form a basis for $\mathbb{R}^n$ and hence must be linearly independent. Therefore, $\mathbf{p}_1$, $\mathbf{p}_2$, …, $\mathbf{p}_n$ are a set of linearly independent column vectors.

Now, if we rewrite $D = P^{-1}AP$ we arrive at $AP = PD$, or,
$$A\begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_n \end{bmatrix} = \begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_n \end{bmatrix}\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
Theorem 1 from the Matrix Arithmetic section tells us that the $j$th column of $PD$ is $P[j\text{th column of }D]$, and so the $j$th column of $PD$ is nothing more than $\lambda_j\mathbf{p}_j$. The same theorem tells us that the $j$th column of $AP$ is $A[j\text{th column of }P]$, or $A\mathbf{p}_j$. Now, since we have $AP = PD$, the columns of both sides must be equal and so we must have,
$$A\mathbf{p}_1 = \lambda_1\mathbf{p}_1 \qquad A\mathbf{p}_2 = \lambda_2\mathbf{p}_2 \qquad \cdots \qquad A\mathbf{p}_n = \lambda_n\mathbf{p}_n$$
So, the diagonal entries from $D$, $\lambda_1$, $\lambda_2$, …, $\lambda_n$, are the eigenvalues of $A$ and their corresponding eigenvectors are the columns of $P$, $\mathbf{p}_1$, $\mathbf{p}_2$, …, $\mathbf{p}_n$. Also, as we noted above, these are a set of linearly independent vectors, which is what we were asked to prove.

We now need to prove $(b) \Rightarrow (a)$, and we've done most of the work for this in the previous part. Let's start by assuming that the eigenvalues of $A$ are $\lambda_1$, $\lambda_2$, …, $\lambda_n$ and that their associated eigenvectors, $\mathbf{p}_1$, $\mathbf{p}_2$, …, $\mathbf{p}_n$, are linearly independent. Now, form a matrix $P$ whose columns are $\mathbf{p}_1$, $\mathbf{p}_2$, …, $\mathbf{p}_n$.
So, $P$ has the form,
$$P = \begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_n \end{bmatrix}$$
Now, as we noted above, the columns of $AP$ are given by
$$A\mathbf{p}_1 \qquad A\mathbf{p}_2 \qquad \cdots \qquad A\mathbf{p}_n$$
However, using the fact that $\mathbf{p}_1$, $\mathbf{p}_2$, …, $\mathbf{p}_n$ are the eigenvectors of $A$, each of these columns can be written as,
$$A\mathbf{p}_1 = \lambda_1\mathbf{p}_1 \qquad A\mathbf{p}_2 = \lambda_2\mathbf{p}_2 \qquad \cdots \qquad A\mathbf{p}_n = \lambda_n\mathbf{p}_n$$
Therefore, $AP$ can be written as,
$$AP = A\begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_n \end{bmatrix} = \begin{bmatrix} \lambda_1\mathbf{p}_1 & \lambda_2\mathbf{p}_2 & \cdots & \lambda_n\mathbf{p}_n \end{bmatrix}$$
However, as we saw above, the matrix on the right can be written as $PD$, where $D$ is the following diagonal matrix,
$$D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
So, we've managed to show that by defining $P$ as above we have $AP = PD$. Finally, since the columns of $P$ are $n$ linearly independent vectors in $\mathbb{R}^n$, we know that they will form a basis for $\mathbb{R}^n$, and so by Theorem 8 from the Fundamental Subspaces section we know that $P$ must be invertible and hence we have,
$$P^{-1}AP = D$$
where $D$ is a diagonal matrix. Therefore $A$ is diagonalizable.

Let's take a look at a couple of examples.

Example 1 Find a matrix $P$ that will diagonalize each of the following matrices.
(a) $A = \begin{bmatrix} 4 & 0 & 1 \\ -1 & -6 & -2 \\ 5 & 0 & 0 \end{bmatrix}$
(b) $A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}$

Solution Okay, provided we can find 3 linearly independent eigenvectors for each of these we'll have a pretty easy time of this, since we know that the columns of $P$ will then be these three eigenvectors. Nicely enough for us, we did exactly this in Example 6 of the previous section. At the time it probably seemed like there was no reason for writing down specific eigenvectors for each eigenvalue, but we did it for the problems in this section. So, in each case we'll just go back to Example 6, pull the eigenvectors from that example, and form up $P$.

(a) This was part (a) from Example 6 and so $P$ is,
$$P = \begin{bmatrix} -\tfrac{1}{5} & 0 & 1 \\ -\tfrac{9}{25} & 1 & -\tfrac{3}{11} \\ 1 & 0 & 1 \end{bmatrix}$$
We'll leave it to you to verify that we get,
$$P^{-1}AP = \begin{bmatrix} -\tfrac{5}{6} & 0 & \tfrac{5}{6} \\ -\tfrac{4}{55} & 1 & \tfrac{19}{55} \\ \tfrac{5}{6} & 0 & \tfrac{1}{6} \end{bmatrix}\begin{bmatrix} 4 & 0 & 1 \\ -1 & -6 & -2 \\ 5 & 0 & 0 \end{bmatrix}\begin{bmatrix} -\tfrac{1}{5} & 0 & 1 \\ -\tfrac{9}{25} & 1 & -\tfrac{3}{11} \\ 1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -6 & 0 \\ 0 & 0 & 5 \end{bmatrix}$$

(b) This was part (c) from Example 6, so $P$ is,
$$P = \begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$$
Again, we'll leave it to you to verify that,
$$P^{-1}AP = \begin{bmatrix} -\tfrac{1}{3} & \tfrac{2}{3} & -\tfrac{1}{3} \\ -\tfrac{1}{3} & -\tfrac{1}{3} & \tfrac{2}{3} \\ \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \end{bmatrix}\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}\begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$

Example 2 Neither of the following matrices is diagonalizable.
(a) $A = \begin{bmatrix} 6 & 3 & -8 \\ 0 & -2 & 0 \\ 1 & 0 & -3 \end{bmatrix}$
(b) $A = \begin{bmatrix} 4 & 0 & -1 \\ 0 & 3 & 0 \\ 1 & 0 & 2 \end{bmatrix}$

Solution To see that neither of these is diagonalizable, simply go back to Example 6 in the previous section to see that neither matrix has 3 linearly independent eigenvectors. In both cases we have only two linearly independent eigenvectors and so neither matrix is diagonalizable. For reference purposes, part (a) of this example matches part (b) of Example 6 and part (b) of this example matches part (d) of Example 6.

We didn't actually do any of the work here for these problems, so let's summarize how we need to go about finding $P$, provided it exists of course. We first find the eigenvalues of the matrix $A$ and then for each eigenvalue find a basis for the eigenspace corresponding to that eigenvalue. The set of basis vectors will then serve as a set of linearly independent eigenvectors for the eigenvalue.
If, after we've done this work for all the eigenvalues, we have a set of $n$ eigenvectors then $A$ is diagonalizable and we use the eigenvectors to form $P$. If we don't have a set of $n$ eigenvectors then $A$ is not diagonalizable.

Actually, we should be careful here. In the above statement we assumed that if we had $n$ eigenvectors then they would be linearly independent. We should always verify this, of course. There is also one case where we can guarantee that we'll have $n$ linearly independent eigenvectors.

Theorem 2 If $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$ are eigenvectors of $A$ corresponding to the $k$ distinct eigenvalues $\lambda_1$, $\lambda_2$, …, $\lambda_k$ then they form a linearly independent set of vectors.

Proof: We'll prove this by assuming that $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$ are in fact linearly dependent; from this we'll get a contradiction, and so we'll see that $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$ must be linearly independent.

So, assume that $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$ form a linearly dependent set. Now, since these are eigenvectors we know that they are all non-zero vectors. This means that the set $\{\mathbf{v}_1\}$ must be a linearly independent set. So, we know that there must be a linearly independent subset of $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$. So, let $p$ be the largest integer such that $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ is a linearly independent set. Note that we must have $1 \le p < k$ because we are assuming that $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$ are linearly dependent. Therefore we know that if we take the next vector $\mathbf{v}_{p+1}$ and add it to our linearly independent vectors, $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$, the set $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p, \mathbf{v}_{p+1}\}$ will be a linearly dependent set.

So, if we know that $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p, \mathbf{v}_{p+1}\}$ is a linearly dependent set we know that there are scalars $c_1, c_2, \ldots, c_p, c_{p+1}$, not all zero, so that,
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p + c_{p+1}\mathbf{v}_{p+1} = \mathbf{0} \qquad (1)$$
Now, multiply this by $A$ to get,
$$c_1 A\mathbf{v}_1 + c_2 A\mathbf{v}_2 + \cdots + c_p A\mathbf{v}_p + c_{p+1} A\mathbf{v}_{p+1} = \mathbf{0}$$
We know that the $\mathbf{v}_i$ are eigenvectors of $A$ corresponding to the eigenvalues $\lambda_i$, and so we know that $A\mathbf{v}_i = \lambda_i\mathbf{v}_i$. Using this gives us,
$$c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2 + \cdots + c_p\lambda_p\mathbf{v}_p + c_{p+1}\lambda_{p+1}\mathbf{v}_{p+1} = \mathbf{0} \qquad (2)$$
Next, multiply both sides of (1) by $\lambda_{p+1}$ to get,
$$c_1\lambda_{p+1}\mathbf{v}_1 + c_2\lambda_{p+1}\mathbf{v}_2 + \cdots + c_p\lambda_{p+1}\mathbf{v}_p + c_{p+1}\lambda_{p+1}\mathbf{v}_{p+1} = \mathbf{0}$$
and subtract this from (2). Doing this gives,
$$c_1(\lambda_1 - \lambda_{p+1})\mathbf{v}_1 + c_2(\lambda_2 - \lambda_{p+1})\mathbf{v}_2 + \cdots + c_p(\lambda_p - \lambda_{p+1})\mathbf{v}_p = \mathbf{0}$$
Now, recall that we assumed that $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ is a linearly independent set, and so the coefficients here must all be zero. Or,
$$c_1(\lambda_1 - \lambda_{p+1}) = 0 \qquad c_2(\lambda_2 - \lambda_{p+1}) = 0 \qquad \cdots \qquad c_p(\lambda_p - \lambda_{p+1}) = 0$$
However, the eigenvalues are distinct, and so the only way all of these can be zero is if,
$$c_1 = c_2 = \cdots = c_p = 0$$
Plugging these values into (1) gives us $c_{p+1}\mathbf{v}_{p+1} = \mathbf{0}$. But $\mathbf{v}_{p+1}$ is an eigenvector and hence is not the zero vector, and so we must have
$$c_{p+1} = 0$$
So, what have we shown to this point? We've just seen that the only possible solution to
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p + c_{p+1}\mathbf{v}_{p+1} = \mathbf{0}$$
is
$$c_1 = c_2 = \cdots = c_p = c_{p+1} = 0$$
This, however, would mean that the set $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p, \mathbf{v}_{p+1}\}$ is linearly independent, and we assumed that at least some of the scalars were not zero. Therefore, this contradicts the fact that we assumed that this set was linearly dependent.
Therefore our original assumption that $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$ form a linearly dependent set must be wrong. We can then see that $\mathbf{v}_1$, $\mathbf{v}_2$, …, $\mathbf{v}_k$ form a linearly independent set.

We can use this theorem to quickly identify some diagonalizable matrices.

Theorem 3 Suppose that $A$ is an $n \times n$ matrix and that $A$ has $n$ distinct eigenvalues. Then $A$ is diagonalizable.

Proof: By Theorem 2 we know that the eigenvectors corresponding to each of the eigenvalues are a linearly independent set, and then by Theorem 1 above we know that $A$ will be diagonalizable.

We'll close this section out with a nice theorem about powers of diagonalizable matrices and the inverse of an invertible diagonalizable matrix.

Theorem 4 Suppose that $A$ is a diagonalizable matrix and that $P^{-1}AP = D$. Then,
(a) If $k$ is any positive integer we have,
$$A^{k} = PD^{k}P^{-1}$$
(b) If all the diagonal entries of $D$ are non-zero then $A$ is invertible and,
$$A^{-1} = PD^{-1}P^{-1}$$

Proof:
(a) We'll give the proof for $k = 2$ and leave it to you to generalize the proof for larger values of $k$. Let's start with the following.
$$D^{2} = \left(P^{-1}AP\right)\left(P^{-1}AP\right) = P^{-1}A\left(PP^{-1}\right)AP = P^{-1}A(I)AP = P^{-1}A^{2}P$$
So, we can see that,
$$D^{2} = P^{-1}A^{2}P$$
We can finish this off by multiplying the left of this equation by $P$ and the right by $P^{-1}$ to arrive at,
$$A^{2} = PD^{2}P^{-1}$$
(b) First, we know that if the main diagonal entries of a diagonal matrix are non-zero then the diagonal matrix is invertible. Now, all that we need to show is that,
$$A\left(PD^{-1}P^{-1}\right) = I$$
This is easy enough to do. All we need to do is plug in the fact that, from part (a) using $k = 1$, we have,
$$A = PDP^{-1}$$
So, let's do the following.
$$A\left(PD^{-1}P^{-1}\right) = PDP^{-1}PD^{-1}P^{-1} = PD\left(P^{-1}P\right)D^{-1}P^{-1} = P\left(DD^{-1}\right)P^{-1} = PP^{-1} = I$$
So, we're done.
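To tie the chapter together, here is a numerical sketch (mine, not from the notes) of diagonalization and of Theorem 4: build $P$ from eigenvectors, check that $P^{-1}AP$ is diagonal, and compute a matrix power as $PD^{k}P^{-1}$, using the matrix from Example 1(b) above.

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])   # Example 1(b) above

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigvals)

# P^{-1} A P is diagonal, with eigenvalues -1, -1, 2 in some order
print(np.round(np.linalg.inv(P) @ A @ P, 10))

# Theorem 4(a): A^5 = P D^5 P^{-1}
lhs = np.linalg.matrix_power(A, 5)
rhs = P @ np.linalg.matrix_power(D, 5) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))   # True
```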
188803
https://www.neurology.org/doi/10.1212/NXG.0000000000000434
Neurology® Genetics, June 2020, 6 (3)
Views and Reviews, May 11, 2020, Open Access

Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy revisited
Genotype-phenotype correlations of all published cases

Georgia Xiromerisiou, MD, PhD, Chrysoula Marogianni, MD, MSc, Katerina Dadouli, MSc, Christina Zompola, MD, Despoina Georgouli, MD, MSc, Antonios Provatas, MD, PhD, Aikaterini Theodorou, MD, Paschalis Zervas, MD, Christina Nikolaidou, MD, Stergios Stergiou, MD, Panagiotis Ntellas, MD, Maria Sokratous, MD, MSc, Pantelis Stathis, MD, PhD, Georgios P. Paraskevas, MD, PhD, Anastasios Bonakis, MD, PhD, Konstantinos Voumvourakis, MD, PhD, Christos Hadjichristodoulou, MD, PhD, Georgios M. Hadjigeorgiou, MD, PhD and Georgios Tsivgoulis, MD, PhD

Abstract

Objective
The aim of this study was to evaluate the correlation between the various NOTCH3 mutations and their clinical and genetic profile, along with the presentation of a novel mutation in a patient.

Methods
Here, we describe the phenotype of a patient with cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) harboring a novel mutation.
We also performed an extensive literature search for NOTCH3 mutations published since the identification of the gene and performed a systematic review of all published cases with NOTCH3 mutations. We evaluated the mutation pathogenicity in a great number of patients with detailed clinical and genetic evaluation and investigated possible phenotype-genotype correlations.

Results
Our patient harbored a novel mutation in the NOTCH3 gene, the c.3084 G > C, corresponding to the amino acid substitution p.Trp1028Cys, presenting with seizures as the first neurologic manifestation. We managed to find a correlation between the pathogenicity of mutations, severity of the phenotype, and age at onset of CADASIL. Significant differences were also identified between men and women regarding phenotype severity.

Conclusions
The collection and analysis of these scarce data published since the identification of NOTCH3, qualitatively by means of a systematic review and quantitatively regarding genetic profile and pathogenicity scores, highlight the significance of the ongoing trend of investigating phenotype-genotype correlations.

Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is the most common heritable cause of stroke in adults younger than 65 years.1 Although the description of the first case was made around 1955,2 the official characterization of the disorder was defined in 1993 after the discovery of the responsible gene, NOTCH3, on chromosome 19.3 Although the clinical manifestations of the disease are directly linked to the brain lesions, there is a systemic arteriopathy that affects the skin, the spleen, the liver, the kidneys, and the aorta apart from the brain.4

CADASIL has 4 basic clinical characteristics: migraine with aura, relapsing episodes of transient ischemic attacks (TIAs) and ischemic strokes, psychiatric symptoms such as apathy and severe mood swings, and gradual cognitive impairment that eventually leads to severe dementia.3 Another fundamental element of this disorder is the leukoencephalopathy and the subcortical infarcts identified on brain MRI, especially in the external capsules and the anterior pole of the temporal lobes.5 Pathologic findings have confirmed the extensive changes in the brain parenchyma, compatible with chronic small artery disease, mainly affecting the white matter in the periventricular areas and the region of the basal ganglia. It is worth mentioning that the cortex, which appeared unaffected on neuroimaging, displays extensive neuronal apoptosis on closer examination. Furthermore, microscopic examination of the brain lesions has revealed a specific arteriopathy in which there is a thickening of the arterial wall of small penetrating cerebral and leptomeningeal arteries, leading to lumen stenosis.6 At the same time, electron microscopy of a specimen from a pathologic skin biopsy can identify deposits of granular osmiophilic material (GOM) located in the basement membrane of smooth muscle cells.7 CADASIL is the only disorder in which GOM has been identified. However, some reports on the sensitivity of detecting GOM in skin biopsies of patients with genetically proven CADASIL have been contradictory.8,9

CADASIL is an autosomal dominant inherited arteriopathy caused by mutations in the NOTCH3 gene. The NOTCH3 gene encodes a single-pass transmembrane protein with receptor properties.
This receptor is mainly expressed in the smooth muscle cells of blood vessels and pericytes.10 After ligand binding, the intracellular part translocates to the nucleus and activates transcription factors. NOTCH3 has 33 exons, but all the mutations found are located in exons 2–24, which are responsible for encoding the 34 epidermal growth factor-like repeats (EGFr). Most of these mutations are missense mutations, although there are a few in-frame deletions or splice-site mutations.11 Gene mutation analysis of NOTCH3 is the gold standard to diagnose this genetically inherited disease, and there are more than 230 different mutations located in 20 different exons reported in patients with CADASIL.12

In this review, we report a patient with CADASIL with a novel heterozygous NOTCH3 mutation and epileptic seizures as the very first manifestation of the disorder. In silico analysis revealed the pathogenicity of the mutation. However, we proceeded to skin biopsy to detect deposits of GOM. In addition, we performed a systematic review of all published cases with NOTCH3 mutations. We evaluated the mutation pathogenicity in all previously reported cases to investigate possible phenotype-genotype correlations.

Methods

Case description: clinical findings

A 62-year-old woman visited the outpatient stroke clinic of a tertiary care stroke center in Athens, Greece. The patient had experienced episodes of loss of consciousness, some of them accompanied by tonic contraction of the upper arms and tongue biting, over the past 40 years. She had been receiving an antiepileptic drug (levetiracetam 1250 mg daily) for the past 2 years because of repeated episodes of tonic-clonic seizures. She had also experienced an ischemic lacunar stroke in the distribution of the left middle cerebral artery 4 months before her visit to the outpatient clinic of our department, which resulted in right hemiparesis. The patient's husband and children reported changes in her personality and cognitive decline, gradually worsening over the past 5 years. Her family history revealed a brother with epilepsy and a father with posttraumatic epilepsy who died at the age of 60 years. The patient had no history of hypertension or diabetes or other known cardiovascular risk factors. She was receiving escitalopram (20 mg) for depression, levetiracetam (1250 mg) for seizures, and clopidogrel (75 mg) and atorvastatin (20 mg) for secondary stroke prevention.

The patient was admitted to our department for further diagnostic workup. We repeated brain MRI, which revealed extensive bilateral white matter lesions located mainly in the frontal lobes, the anterior temporal lobe, the external capsules, the centrum semiovale, and the basal ganglia (figure 1, A–C). There was no evidence of acute infarction. Furthermore, multiple cerebral microbleeds (CMBs) located predominantly in the thalami were documented on susceptibility-weighted imaging. The patient underwent an extensive workup for stroke (including transesophageal echocardiography, neck and brain MR angiography, a full coagulation disorder panel, repeated 24-hour Holter ECG recordings, molecular genetic screening for Fabry disease, and immunologic screening for autoimmune disorders) that was unremarkable. We also found no evidence of a demyelinating CNS disorder on CSF analysis (normal results with absent oligoclonal bands) and on cervical as well as thoracic spinal cord MRI.
Figure 1. Brain MRI of the patient (A–C), showing extensive leukoencephalopathy on FLAIR sequences, mainly in the frontal lobes, the anterior temporal lobes, the external capsules, and the basal ganglia; and of the daughter (D–F), showing multiple white matter hyperintensities on FLAIR sequences.

Bedside neuropsychologic examination showed moderate cognitive decline (Mini-Mental State Examination 20/30), with deficits mainly in attention and recall, and a Frontal Assessment Battery score of 6/18, indicating a serious disorder of conceptualization and mental flexibility. The patient's daughter complained of chronic tension-type headaches and was further investigated with brain MRI, which revealed multiple hyperintense subcortical white matter lesions (figure 1, D–F). The patient's son was asymptomatic and, despite our suggestions, declined brain MRI.

Case description: genetic analysis

Given the extensive workup described above, the clinical course, the symptoms, and the compatible neuroimaging, together with the mode of inheritance, the age at onset, and the phenotype, the suspicion of CADASIL was raised, and we proceeded to NOTCH3 genetic analysis. All study participants provided written informed consent for genetic analysis, and the study was approved by the Ethics Committee of the University Hospital of Attikon. Genomic DNA of all available family members was extracted from peripheral white blood cells according to standard protocols. NOTCH3 analysis identified a heterozygous mutation in exon 19 of the NOTCH3 gene, c.3084 G > C, corresponding to the amino acid substitution p.Trp1028Cys. This is a novel mutation, compatible with CADASIL, leading to the gain of a cysteine residue in one of the 34 epidermal growth factor-like repeat (EGFr) domains of the protein encoded by NOTCH3. This results in an uneven number of cysteine residues in the affected EGFr domain, most likely modifying the tertiary structure of the protein. According to the American College of Medical Genetics and Genomics and the Association for Molecular Pathology 2015 guidelines, the p.Trp1028Cys variant is classified as "pathogenic" based on the following criteria: (1) null variant (PVS1): a missense mutation leading to the gain of a cysteine residue; (2) absence of the variant from controls in the Exome Sequencing Project, 1000 Genomes Project, and Exome Aggregation Consortium (PM2); (3) prediction by in silico bioinformatics tools (HomoloGene, GERP, VarSome) of a deleterious effect on the gene product, as the variant occurs in an area highly conserved across multiple species (PP3) (ncbi.nlm.nih.gov/homologene); and (4) a patient phenotype highly specific for the disease (PP4).
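As a cross-check on the nomenclature above, the coding position and the affected codon are related by simple arithmetic: position 3084 is the third base of codon 3084/3 = 1028, and a G>C change at that position turns the single tryptophan codon TGG into the cysteine codon TGC. The minimal sketch below reproduces this check; the helper function is our own illustration, not part of the original analysis.

```python
# Minimal sketch (not from the original article): verify that c.3084 G>C
# corresponds to p.Trp1028Cys under the standard genetic code. HGVS counts
# coding positions from the A of the initiator ATG.

CODON_TO_AA = {"TGG": "Trp", "TGC": "Cys"}  # only the two codons needed here

def locate_in_codon(cds_position: int):
    """Return (codon number, 1-based offset within that codon)."""
    codon_number = (cds_position - 1) // 3 + 1
    offset = (cds_position - 1) % 3 + 1
    return codon_number, offset

codon_no, offset = locate_in_codon(3084)
assert (codon_no, offset) == (1028, 3)  # third base of codon 1028

ref_codon = "TGG"  # tryptophan; its third base is the reference G
alt_codon = ref_codon[:offset - 1] + "C" + ref_codon[offset:]
print(f"c.3084 G>C -> codon {codon_no}: {CODON_TO_AA[ref_codon]} ({ref_codon}) "
      f"-> {CODON_TO_AA[alt_codon]} ({alt_codon})")
# prints: c.3084 G>C -> codon 1028: Trp (TGG) -> Cys (TGC)
```

The gained cysteine is what produces the odd cysteine count in the affected EGFr domain noted above.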
Case description: histopathologic staining

For immunohistochemistry, we used 10-μm tissue sections. Endogenous peroxidase activity was eliminated by treatment with 0.5% periodic acid solution for 10 minutes. The sections were incubated in blocking buffer (1% bovine serum albumin and 5% rabbit serum in phosphate-buffered saline). A monoclonal antibody against the extracellular domain of NOTCH3, diluted 1:100 in blocking buffer, served as the primary antibody. The sections were counterstained with Victoria blue. To visualize the internal elastic lamina of the vascular walls, we used Victoria blue–hematoxylin and eosin staining. No specific immunoreactive signal was detected in the vessels; the specimen was therefore scored as negative.

Review of published CADASIL data

Systematic search

We conducted a systematic review of the literature, investigating all known mutations of NOTCH3 and their phenotypic characteristics. We searched PubMed, the Cochrane Library, and MEDLINE (via PubMed) to identify all studies published before August 2019, using a combination of the terms "CADASIL," "cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy," "NOTCH3," and "NOTCH3 mutations." All studies presenting original data on the clinical, genetic, and radiologic characteristics of patients with CADASIL were included for further review. English language was the only filter applied initially. The references of selected articles and reviews were also searched for additional records. The data extracted from each study were title; author; year of publication; exon, mutation, and exact amino acid change; total cases screened carrying the mutant allele; sociodemographic characteristics (origin, age, and sex); age at onset; family history; clinical features of the disease (migraine, stroke, psychiatric disorders, cognitive decline, acute encephalopathy, and atypical findings); and findings from diagnostic procedures (MRI findings and skin biopsy).

Data collection and eligibility criteria

Studies were selected when they met the following criteria: (1) the diagnosis of CADASIL was confirmed by mutation analysis, (2) the clinical findings of CADASIL were described, (3) the identified mutation was described, (4) when the same patient population was presented in more than one publication, the publication with the largest sample size or the most recent publication date was used as the primary study for data extraction, (5) the language was English, and (6) the full text was available.

Evaluation of mutations

The pathogenicity of the published mutations was evaluated with computational prediction analysis. The combined annotation-dependent depletion (CADD) algorithm was used to score the deleteriousness of single nucleotide variants and insertions/deletions with respect to the function of NOTCH3. Prediction is based on empirical rules applied to the sequence, phylogenetic, and structural information characterizing each amino acid substitution.13,14 Mutations were characterized qualitatively according to their CADD score: above 30, highly pathogenic; above 20, pathogenic; between 15 and 20, likely pathogenic; below 15, likely benign; and below 10, benign. The Human Splicing Finder tool was used to evaluate mutations that potentially affect splicing.15
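Expressed in code, this qualitative banding is a simple threshold map. The sketch below is illustrative only; the behavior at the exact boundaries (15, 20, and 30) is our assumption, as the text does not specify it.

```python
def classify_cadd(score: float) -> str:
    """Map a CADD score to the qualitative bands used in this review:
    >30 highly pathogenic, >20 pathogenic, 15-20 likely pathogenic,
    <15 likely benign, <10 benign (boundary handling assumed)."""
    if score > 30:
        return "highly pathogenic"
    if score > 20:
        return "pathogenic"
    if score >= 15:
        return "likely pathogenic"
    if score >= 10:
        return "likely benign"
    return "benign"

# The extreme scores reported in the Results (maximum 44, minimum 0.6)
# and the mean score (26) fall into the expected bands:
for s in (44, 26, 17.5, 12, 0.6):
    print(s, "->", classify_cadd(s))
```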
Statistical analysis

We used descriptive statistics to present the demographic, clinical, and other characteristics of the patients overall. Data were checked for deviation from the normal distribution (Shapiro-Wilk test). Categorical data were analyzed with the χ2 test; the Student t test, Mann-Whitney U test, analysis of variance, and Kruskal-Wallis test were used for continuous data, as appropriate. Spearman correlation analyses were used to estimate correlations between quantitative variables. Multivariable analysis was performed in the form of multiple linear regression and multinomial (polytomous) logistic regression. The cutoff value of the age at onset was calculated using the receiver operating characteristic curve. For all analyses, a 5% significance level was set, and the Bonferroni method was used to correct the significance level where necessary.16,17 The analysis was carried out with SPSS version 25.0.

Results

Literature review

We screened a total of 695 articles yielded by our literature search and excluded 476 that evaluated animal models, lacked clinical and genetic data, or were duplicates. After full-text retrieval and review, we included the 224 remaining articles that met the predefined eligibility criteria and proceeded to the qualitative analysis (Supplementary 1, links.lww.com/NXG/A262). The selected studies consisted of noncontrolled case series and case reports (figure 2).

Figure 2. Flow chart depicting our literature search and study selection for the systematic review.

Demographics: clinical phenotype

A total of 752 patients (50% men) were included in our systematic review. Most patients were Caucasians of European origin (45%), followed by Asians (43%); only a few patients originated from North and South America, Africa, and Australia/New Zealand. Most case reports and small case series described patients from Japan, China, and Italy. The frequencies of CADASIL mutations across countries among our reviewed cases are presented on a world map in figure 3. Notably, the largest CADASIL case series, which were excluded from our analysis because of the lack of detailed clinical and genetic information, originated from the Netherlands, Germany, the United Kingdom, and Japan.

Figure 3. World map showing the frequencies of CADASIL mutations across countries among our reviewed cases.

Detailed clinical information was available for 752 patients. The overall mean age of the reported cases was 52 ± 14 years, whereas the mean age at onset was 43 ± 14 years. Approximately 64% of the included patients had a positive family history, 10% had no family history, and no information was available for the remaining 25%. Migraine was reported in 23% of patients. Stroke was the most common presentation (52%), followed by cognitive decline (46%), whereas psychiatric disorders had a prevalence of 24%. Psychiatric disorders consisted predominantly of depressive symptoms, apart from a minority of cases with bipolar disorder (2 cases) and psychosis (1 case). Epileptic disorders were reported in 4% of patients. Certain clinical characteristics mentioned in the included studies are atypical or, in some cases, could be explained by the recurrent strokes the patients had suffered. For example, we found sensory deficits of the upper arms and findings of polyneuropathy in 10 cases. Furthermore, several cases (approximately 7%) of intracerebral hemorrhage were described. Interestingly, movement disorders such as ataxia and parkinsonism were present in approximately 4% of the included patients. Symptoms such as dizziness, vertigo, gait disturbance, and dysarthria are probably consequences of the recurrent strokes. At some point in the course of their disease, approximately 2% of patients presented with a confusional state and/or encephalopathy ("CADASIL coma") after severe headache lasting several days.18,19

We divided the patients according to the presence of one, or a combination of 2 or all, of the major clinical characteristics above [phenotype 1 = patients with migraine; phenotype 2 = patients with migraine and stroke, or stroke only; and phenotype 3 = (1) patients with migraine, stroke, and cognitive and/or psychiatric disorder and (2) patients with stroke and cognitive decline and/or psychiatric disorder]; a rule-based rendering of this grouping is sketched below. Most patients (34%) presented with phenotype 3, followed by phenotype 2 (32%); phenotype 1 was documented in 7% of the patients included in this systematic review.
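For clarity, the grouping rule can be written out as a small function. This is an illustrative sketch only; how patients with symptom combinations not covered by the three definitions (for example, cognitive decline without stroke) were handled is not stated in the text, so the fallback below is our assumption.

```python
def phenotype_group(migraine: bool, stroke: bool,
                    cognitive: bool, psychiatric: bool):
    """Assign the phenotype group (1-3) defined above, or None for
    combinations the three definitions do not cover (assumption)."""
    if stroke and (cognitive or psychiatric):
        return 3  # stroke (with or without migraine) plus cognitive and/or psychiatric disorder
    if stroke:
        return 2  # migraine and stroke, or stroke only
    if migraine and not (cognitive or psychiatric):
        return 1  # migraine only
    return None

assert phenotype_group(migraine=True,  stroke=False, cognitive=False, psychiatric=False) == 1
assert phenotype_group(migraine=False, stroke=True,  cognitive=False, psychiatric=False) == 2
assert phenotype_group(migraine=True,  stroke=True,  cognitive=True,  psychiatric=False) == 3
```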
Neuroimaging and skin biopsy

The predominant radiologic manifestations of CADASIL on brain MRI are hyperintense lesions located in the white matter of the anterior temporal poles, centrum semiovale, external capsules, basal ganglia, and pons. Most patients had radiologic signs of leukoencephalopathy on their brain MRI scans (72%), although the precise distribution of the white matter lesions was not mentioned in most reports; no information regarding the presence of white matter lesions was available for 25% of patients. Furthermore, CMBs were reported in 7% of cases and were absent in 28% of the brain MRI scans; in most cases (65%), however, the presence or absence of microbleeds was not reported. Skin biopsy data, a sensitive diagnostic tool for CADASIL, were not available for more than half of the reported cases. Approximately 39% of patients had undergone skin biopsy: approximately 14% of patients had findings compatible with the diagnosis of CADASIL, whereas 24% tested negative for the characteristic granular deposits in the basal lamina of blood vessels. In most studies, GOM was confirmed with electron microscopy, and in the rest with immunohistochemistry.

Mutations

Most of the described mutations were located in exon 4 (29%), followed by exon 3 (14%), exon 11 (8%), and exon 19 (6%) (figure 4). Most mutations are predicted to result in the gain or loss of at least one cysteine residue; 14% of the described mutations did not involve a cysteine residue. Most mutations were missense mutations (74%), whereas a few splicing mutations and deletions were also reported. In total, 85% of these mutations were pathogenic and 12% were highly pathogenic, whereas approximately 1% corresponded to benign mutations. Regarding quantitative pathogenicity scores, the mean score was 26 (maximum = 44, minimum = 0.6, SD = 4). The most common mutation was p.R169C in exon 4, followed by p.R182C in the same exon.

Figure 4. Distribution of the most common mutations across the NOTCH3 gene.

Genotype-phenotype correlation

We first performed several univariate analyses of the associations of all clinical characteristics and phenotypes with genotypic characteristics and pathogenicity scores. The age at onset of CADASIL was significantly associated with the pathogenicity score of the mutation (rho = −0.165, p < 0.001). Highly pathogenic and pathogenic mutations appear to induce an earlier age at onset compared with likely pathogenic, benign, and likely benign variants: the higher the pathogenicity score, the earlier the onset of the disease (figure 5).

Figure 5. Univariate analysis showing the effect of the pathogenicity score of the mutations on the age at onset of the disorder.
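The published analyses were run in SPSS; purely as an illustration, the univariate association just described (Spearman rho = −0.165 between pathogenicity score and age at onset) could be recomputed from a per-patient extraction table along the following lines. The file and column names here are hypothetical.

```python
# Illustrative sketch only; "cadasil_cases.csv" and its columns are
# hypothetical stand-ins for the per-patient extraction table.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("cadasil_cases.csv")
rho, p = spearmanr(df["cadd_score"], df["age_at_onset"], nan_policy="omit")
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```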
Multivariable analyses taking several clinical and genetic factors into account further indicated that the age at disease onset was independently associated with the pathogenicity score (B = −0.4, 95% confidence interval [CI] = −0.7 to 0.1; p < 0.004) (table e-1, links.lww.com/NXG/A263). Regarding phenotype severity, women appear to have a different phenotype compared with men (p = 0.003), as presented in figure 6: migraine is more common in women than in men (by 44%), and women usually present a less severe phenotype.

Figure 6. Bar chart showing the differences between men and women regarding phenotype severity (phenotype severity vs sex, p = 0.003).

Phenotype severity is also highly correlated with age (p < 0.001) (table e-2, links.lww.com/NXG/A264). Migraine is prevalent around the age of 40 years, whereas a more severe phenotype comprising stroke, or stroke and migraine, is more prevalent in older patients (50–60 years). Cognitive decline and other psychiatric disorders manifest in the same age range (50–60 years) as stroke (figure 7), indicating a cumulative effect of neurologic deficits as the disorder progresses. More specifically, the mean age at onset in patients presenting with migraine, stroke, and cognitive decline or other psychiatric disorders was 40, 52, and 55 years, respectively.

Figure 7. Box plot depicting the distribution of CADASIL phenotypes across age (phenotype severity vs age, p < 0.001).

The pathogenicity score appears to be affected solely by the location of the mutation (p < 0.001) and is not associated with phenotype severity (figure 8): all 3 phenotypes harbor the same pathogenic mutations in both the qualitative and the quantitative analysis.

Figure 8. Bar chart showing the distribution of mutations across the exons of the gene according to their pathogenicity score (exon vs pathogenicity score, p < 0.001).

Clinical manifestations and phenotype severity did not vary significantly between patients harboring cysteine and cysteine-sparing mutations. However, the age at onset was significantly earlier (mean difference = 7.8 years, 95% CI 2.2–13.5) in patients with cysteine-sparing mutations (p = 0.009); the mean age at onset in patients with cysteine and cysteine-sparing mutations was 51 and 43 years, respectively. Regarding the clinical profile and phenotypic characteristics of Asian and Caucasian patients, there were certain significant differences: phenotypes 1 (migraine) and 2 (migraine and stroke, or stroke only) were more prevalent in Caucasians than in Asians (figure 3). The distribution of mutations also differed between Asians and Caucasians, with cysteine mutations more prevalent in Asian populations (figure 4).

Discussion

The present review summarizes the clinical and neuroimaging findings of a novel mutation in exon 19 of the NOTCH3 gene, c.3084 G > C, corresponding to the amino acid substitution p.Trp1028Cys. We have also documented that the type and pathogenicity of the mutation are independently associated with the age at disease onset. Finally, phenotypic and genotypic characteristics differ significantly between Asians and Caucasians. Most mutations in the NOTCH3 gene are located within the large extracellular domain (ECD) of the transmembrane receptor.
This domain consists of 34 epidermal growth factor (EGF)-like repeats that contain cysteine residues. The pathogenic mutations alter the number of cysteines, leading to misfolding of the receptor; this misfolding contributes significantly to oligomer formation and ECD aggregation, which is considered the pathogenic mechanism of the disease.20,21 A few mutations have been found that do not affect the number of cysteines and thus leave an even number of cysteine residues; the pathogenic role of these cysteine-sparing mutations is controversial.22 Several of these atypical mutations appear to be associated with conformational changes in the protein similar to those observed for typical cysteine-involving mutations, and they seem to cause a similar phenotype without significant differences in the age at onset or phenotype severity.21 Our findings clearly demonstrate this concentration of mutations in the ECD, especially in exons 3 and 4.23 Furthermore, they highlight the significant association of the location of the mutations with the pathogenicity score and phenotype severity. We have not found any overall differences in the age at onset or clinical phenotype between cysteine and cysteine-sparing mutations. These results are consistent with most previous studies in Caucasian and Asian populations. However, small regional studies in Taiwan and Korea have presented different results, indicating that the most common mutations were in exons 11 and 18; these differences may simply reflect a founder effect.24,25 Our study investigated several possible associations between the genetic profile and the clinical manifestations in all previously published cases within the parameters stated earlier. Several studies have thoroughly described the phenotypic manifestations and natural history of the disorder, evaluating the frequency and age distribution of several characteristics.26 Other studies, with limited numbers of patients, have investigated possible phenotype-genotype correlations by comparing patients carrying mutations involving cysteine residues with patients carrying cysteine-sparing mutations, without showing any significant differences.27 It has also been found that genetic variants located in EGFr domains 7–34 are identified in the general population, whereas in patients with CADASIL the variants are predominantly located in EGFr domains 1–6. This must be investigated further in wider population studies to elucidate the role of these variants in the small vessel disease phenotype. Skin biopsy is considered an important diagnostic tool for CADASIL, highly specific but with variable sensitivity. Many technical factors, related to the biopsy itself or to the severity of the disorder, may explain the discrepancy across studies regarding the sensitivity of the method. According to our study, it seems that the use of this tool is not widespread: only a small percentage of studies reported electron microscopy of skin biopsies or immunohistochemistry with a NOTCH3 monoclonal antibody.28 The studies that did not show the typical deposits described phenotypes similar to those of the other reported cases and involved patients with both typical and cysteine-sparing mutations.
In most centers investigating patients with a CADASIL phenotype, skin biopsy appears to be reserved for patients with unclassified variants; genetic testing is the core diagnostic tool for the disorder. Our case failed to show the characteristic findings on immunohistochemistry. Electron microscopy studies are more sensitive than immunohistochemistry.29 Technical reasons, the location of the biopsy, and the severity of the disorder are possible explanations for the lack of typical findings.30 Furthermore, we noted in our study that leukoencephalopathy on MRI is reported in most patients with genetically confirmed CADASIL, and CMBs in a minority of them. However, a detailed description of MRI findings is not universally provided, so certain clinicoradiologic associations cannot be drawn. Several radiologically focused studies have shown that MRI lesion load and pattern can vary considerably across patients. A consistent finding is the presence of symmetrical white matter hyperintensities on T2-weighted and fluid-attenuated inversion recovery images.31 Anterior temporal pole changes have high sensitivity and specificity for the disorder, whereas external capsule changes have high sensitivity but low specificity. The reported occurrence of CMBs is inconsistent across published cases.32 Dilated perivascular spaces and subcortical and cortical atrophy have also been detected.33 With the limited data available, the only significant neuroimaging finding highlighted by our study was that CMBs seem to be an age-related phenomenon associated with the progression of the disorder.

Regarding clinical presentation, migraine is often the earliest feature of the disease. According to previous studies, migraine is the first clinical symptom in 41% of symptomatic patients and an isolated symptom in 12%.34,35 Migraine is also reported in approximately 55–75% of Caucasian cases, although it is less frequent in Asian populations. TIAs and stroke are reported in approximately 85% of symptomatic individuals.35 Several studies have shown that the total lacunar lesion load is strongly associated with the development of disability.4 Cognitive impairment in CADASIL mostly involves information processing speed and executive function and is often associated with apathy and depression. Our study presents findings consistent with previous research. In addition, we were able to investigate the effect of several factors on phenotype severity, defined by the presence of specific clinical symptoms. Overall, the results clearly indicated that phenotype severity is an age-related phenomenon: the disorder appears progressive, with the accumulation of new lacunar infarcts that affect cognition. The pathogenicity score does not significantly affect the combination of symptoms that a patient presents, but it appears to be related to the age at onset. The pathogenicity of the mutation therefore cannot be used to predict the severity of the disorder, and genetic counseling should address the presence of the disorder rather than the appearance of specific phenotypic characteristics.
Another important clinical conclusion was that phenotype severity is also affected by sex, with men manifesting more severe symptoms.36 Regarding cognitive decline, we should also mention that cognitive performance was not measured with the same instruments across studies, or was not assessed at all.37 The presence of impairment is therefore probably underestimated. Longitudinal studies are also needed to investigate changes related to disease progression and their possible association with MRI changes; this will shed light on the pathogenic processes underlying these symptoms.38

Other atypical clinical characteristics have also been reported in patients with CADASIL.39 A very important manifestation seems to be acute encephalopathy or coma,40 a misleading presentation that has been reported several times in the literature but is extremely rare. Acute encephalopathy was reported as the initial manifestation of the disorder in 2 of our reviewed cases. However, an acute encephalopathic presentation of the disease was previously described in 10% of patients in the British CADASIL prevalence study.41 All these patients had a history of migraine; their severe symptoms consisted of episodes of migraine or epileptic seizures that lasted longer than usual, resulting in confusion and disorientation, and were self-limited.

Several small cohort studies have previously described specific phenotypic and genotypic characteristics of patients with CADASIL in detail (Supplementary data, links.lww.com/NXG/A262). They have also focused on specific findings, such as ophthalmologic manifestations, cognitive profile, and MRI features, or on the effect of other factors, such as cardiovascular risk factors, on phenotype.42,43 Some of these studies include large numbers of patients (>200); however, the clinical and genetic information for each individual patient is not publicly available. This prevented us from including these patients in our analysis and introduced a bias concerning the frequency of mutations across countries and more detailed genotype-phenotype correlations. Nevertheless, we strongly believe that the collection and analysis of all these scattered data published since the identification of NOTCH3, qualitatively by means of a systematic review and quantitatively in terms of genetic profile and pathogenicity scores, highlight the significance of the ongoing effort to investigate phenotype-genotype correlations.

Glossary

CADASIL = cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy
CADD = combined annotation-dependent depletion
CI = confidence interval
CMB = cerebral microbleed
ECD = extracellular domain
EGFr = epidermal growth factor-like repeat
GOM = granular osmiophilic material
TIA = transient ischemic attack

References

1. Tournier-Lasserve E, Joutel A, Melki J, et al. Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy maps to chromosome 19q12. Nat Genet 1993;3:256–259.
2. Van Bogaert L. Encéphalopathie sous-corticale progressive (Binswanger) à évolution rapide chez des soeurs. Méd Hellen 1955;24:961–972.
3. Federico A, Bianchi S, Dotti MT. The spectrum of mutations for CADASIL diagnosis. Neurol Sci 2005;26:117–124.
4. Tan RYY, Markus HS. CADASIL: migraine, encephalopathy, stroke and their inter-relationships. PLoS One 2016;11:e0157613.
5. O'Sullivan M, Jarosz JM, Martin RJ, et al. MRI hyperintensities of the temporal lobe and external capsule in patients with CADASIL. Neurology 2001;56:628–634.
6. Craggs L, Yamamoto Y, Ihara M, et al. White matter pathology and disconnection in the frontal lobe in cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL). Neuropathol Appl Neurobiol 2014;40:591–602.
7. Goebel HH, Meyermann R, Rosin R, Schlote W. Characteristic morphologic manifestation of CADASIL, cerebral autosomal-dominant arteriopathy with subcortical infarcts and leukoencephalopathy, in skeletal muscle and skin. Muscle Nerve 1997;20:625–627.
8. Markus HS, Martin RJ, Simpson MA, et al. Diagnostic strategies in CADASIL. Neurology 2002;59:1134–1138.
9. Tikka S, Mykkänen K, Ruchoux M-M, et al. Congruence between NOTCH3 mutations and GOM in 131 CADASIL patients. Brain 2009;132:933–939.
10. Joutel A, Corpechot C, Ducros A, et al. Notch3 mutations in CADASIL, a hereditary adult-onset condition causing stroke and dementia. Nature 1996;383:707–710.
11. Monet-Leprêtre M, Bardot B, Lemaire B, et al. Distinct phenotypic and functional features of CADASIL mutations in the Notch3 ligand binding domain. Brain 2009;132(pt 6):1601–1612.
12. Muiño E, Gallego-Fabrega C, Cullell N, et al. Systematic review of cysteine-sparing NOTCH3 missense mutations in patients with clinical suspicion of CADASIL. Int J Mol Sci 2017;18:1964.
13. Hack R, Rutten J, Lesnik Oberstein SA. CADASIL. 2000 Mar 15 [updated 2019 Mar 14]. In: Adam MP, Ardinger HH, Pagon RA, et al, editors. GeneReviews [Internet]. Seattle (WA): University of Washington, Seattle; 1993–2020.
14. Rentzsch P, Witten D, Cooper GM, Shendure J, Kircher M. CADD: predicting the deleteriousness of variants throughout the human genome. Nucleic Acids Res 2019;47:D886–D894.
15. Human Splicing Finder, version 3.1. Available at: umd.be/HSF/. Accessed October 13, 2019.
16. Statistics for Bioinformatics. Available at: stat.berkeley.edu/users/mgoldman. Accessed September 2019.
17. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B 1995;57:289–300.
18. Eswaradass VP, Ramasamy B, Kalidoss R, Gnanagurusamy G. Cadasil coma: unusual cause for acute encephalopathy. Ann Indian Acad Neurol 2015;18:483–484.
19. Ragno M, Pianese L, Morroni M, et al. "CADASIL coma" in an Italian homozygous CADASIL patient: comparison with clinical and MRI findings in age-matched heterozygous patients with the same G528C NOTCH3 mutation. Neurol Sci 2013;34:1947–1953.
20. Lackovic V, Bajcetic M, Lackovic M, et al. Skin and sural nerve biopsies: ultrastructural findings in the first genetically confirmed cases of CADASIL in Serbia. Ultrastruct Pathol 2012;36:325–335.
21. Santa Y, Uyama E, Chui DH, et al. Genetic, clinical and pathological studies of CADASIL in Japan: a partial contribution of Notch3 mutations and implications of smooth muscle cell degeneration for the pathogenesis. J Neurol Sci 2003;212:79–84.
22. Wollenweber FA, Hanecker P, Bayer-Karpinska A, et al. Cysteine-sparing CADASIL mutations in NOTCH3 show proaggregatory properties in vitro. Stroke 2015;46:786–792.
23. Matsushima T, Conedera S, Tanaka R, et al. Genotype–phenotype correlations of cysteine replacement in CADASIL. Neurobiol Aging 2017;50:169.e7–169.e14.
24. Ueda A, Ueda M, Nagatoshi A, et al. Genotypic and phenotypic spectrum of CADASIL in Japan: the experience at a referral center in Kumamoto University from 1997 to 2014. J Neurol 2015;262:1828–1836.
25. Lee YC, Liu CS, Chang MH, et al. Population-specific spectrum of NOTCH3 mutations, MRI features and founder effect of CADASIL in Chinese. J Neurol 2009;256:249–255.
26. Rutten JW, Dauwerse HG, Gravesteijn G, et al. Archetypal NOTCH3 mutations frequent in public exome: implications for CADASIL. Ann Clin Transl Neurol 2016;3:844–853.
27. Liu X, Zuo Y, Sun W, et al. The genetic spectrum and the evaluation of CADASIL screening scale in Chinese patients with NOTCH3 mutations. J Neurol Sci 2015;354:63–69.
28. Dichgans M, Mayer M, Uttner I, et al. The phenotypic spectrum of CADASIL: clinical findings in 102 cases. Ann Neurol 1998;44:731–739.
29. Gustavsen WR, Reinholt FP, Schlosser A. Skin biopsy findings and results of neuropsychological testing in the first confirmed cases of CADASIL in Norway. Eur J Neurol 2006;13:359–362.
30. Morroni M, Marzioni D, Ragno M, et al. Role of electron microscopy in the diagnosis of CADASIL syndrome: a study of 32 patients. PLoS One 2013;8:e65482.
31. Dziewulska D, Sulejczak D, Wężyk M. What factors determine phenotype of cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL)? Considerations in the context of a novel pathogenic R110C mutation in the NOTCH3 gene. Folia Neuropathol 2017;55:295–300.
32. Shi Y, Li S, Li W, et al. MRI lesion load of cerebral small vessel disease and cognitive impairment in patients with CADASIL. Front Neurol 2018;9:862.
33. Puy L, De Guio F, Godin O, et al. Cerebral microbleeds and the risk of incident ischemic stroke in CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy). Stroke 2017;48:2699–2703.
34. Wang Z, Yuan Y, Zhang W, et al. NOTCH3 mutations and clinical features in 33 mainland Chinese families with CADASIL. J Neurol Neurosurg Psychiatry 2011;82:534–539.
35. Guey S, Mawet J, Hervé D, et al. Prevalence and characteristics of migraine in CADASIL. Cephalalgia 2016;36:1038–1047.
36. Burkett JG, Dougherty C. Recognizing CADASIL: a secondary cause of migraine with aura. Curr Pain Headache Rep 2017;21:21.
37. Gunda B, Hervé D, Godin O, et al. Effects of gender on the phenotype of CADASIL. Stroke 2012;43:137–141.
38. Yoon CW, Kim YE, Seo SW, et al. NOTCH3 variants in patients with subcortical vascular cognitive impairment: a comparison with typical CADASIL patients. Neurobiol Aging 2015;36:2443–2447.
39. Brookes RL, Hollocks MJ, Tan RY, Morris RG, Markus HS. Brief screening of vascular cognitive impairment in patients with cerebral autosomal-dominant arteriopathy with subcortical infarcts and leukoencephalopathy without dementia. Stroke 2016;47:2482–2487.
40. Spinicci G, Conti M, Cherchi MV, Mancosu C, Murru R, Carboni N. Unusual clinical presentations in subjects carrying novel NOTCH3 gene mutations. J Stroke Cerebrovasc Dis 2013;22:539–544.
41. Feuerhake F, Volk B, Ostertag C, et al. Reversible coma with raised intracranial pressure: an unusual clinical manifestation of CADASIL. Acta Neuropathol 2002;103:188–192.
42. Schon F, Martin RJ, Prevett M, Clough C, Enevoldson TP, Markus HS. "CADASIL coma": an underdiagnosed acute encephalopathy. J Neurol Neurosurg Psychiatry 2003;74:249–252.
43. Cumurciuc R, Massin P, Pâques M, et al. Retinal abnormalities in CADASIL: a retrospective study of 18 patients. J Neurol Neurosurg Psychiatry 2004;75:1058–1060.

Published in: Neurology® Genetics, Volume 6, Number 3, June 2020.

Copyright © 2020 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (CC BY-NC-ND), which permits downloading and sharing the work provided it is properly cited; the work cannot be changed in any way or used commercially without permission from the journal.

Publication history: Received January 7, 2020; accepted April 2, 2020; published online May 11, 2020; published in issue June 2020.

Disclosure: All authors declare that they have no conflicts of interest. Go to Neurology.org/NG for full disclosures.

Study funding: No targeted funding.
Authors

Georgia Xiromerisiou, MD, PhD; Chrysoula Marogianni, MD, MSc; Katerina Dadouli, MSc; Christina Zompola, MD; Despoina Georgouli, MD, MSc; Antonios Provatas, MD, PhD; Aikaterini Theodorou, MD; Paschalis Zervas, MD; Christina Nikolaidou, MD; Stergios Stergiou, MD; Panagiotis Ntellas, MD; Maria Sokratous, MD, MSc; Pantelis Stathis, MD, PhD; Georgios P. Paraskevas, MD, PhD.

Affiliations: Department of Neurology (G.X., C.M., D.G., A.P., M.S., G.M.H.), University Hospital of Larissa, Faculty of Medicine, School of Health Sciences, University of Thessaly, Larissa, Greece; Second Department of Neurology (C.Z., A.T., P.Z., A.B., K.V., G.T.), "Attikon" University Hospital, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece; Department of Neurology (G.M.H.), Medical School, University of Cyprus, Nicosia, Cyprus; Department of Hygiene and Epidemiology (K.D., C.H.), Faculty of Medicine, University of Thessaly, Larissa, Greece; Department of Medical Oncology (P.N.), University Hospital of Ioannina, Ioannina, Greece; Department of Neurology (P.S.), Mediterraneo Hospital, Glyfada, Athens, Greece; Histopathological Department (C.N., S.S.), Hippokration General Hospital, Thessaloniki, Greece; and Department of Neurology (G.P.P.), School of Medicine, National and Kapodistrian University of Athens, Eginition Hospital, Athens, Greece.

Author disclosures: The authors report no relevant disclosures, except G.P. Paraskevas, who has served as Associate Editor of Cerebral Circulation Cognition and Behavior since 2019 (no compensation).
NONE View all articles by this author Anastasios Bonakis, MD, PhD From the Department of Neurology (G.X., C.M., D.G., A.P., M.S., G.M.H.), University Hospital of Larissa, Faculty of Medicine, School of Health Sciences, University of Thessaly, Larissa, Greece; Second Department of Neurology (C.Z., A.T., P.Z., A.B., K.V., G.T.), “Attikon” University Hospital, School of Medicine, National and Kapodistrian University of Athens, 12462 Athens, Greece; Department of Neurology (G.M.H.), Medical School, University of Cyprus, Nicosia, Cyprus; Department of Hygiene and Epidemiology (K.D., C.H.), Faculty of Medicine, University of Thessaly, Larissa, Greece; Department of Medical Oncology (P.N.), University Hospital of Ioannina, Ioannina, Greece; Department of Neurology (P.S.), Mediterraneo Hospital, Glyfada, Athens, Greece; Histopathological Department (C.N., S.S.), Hippokration General Hospital Thessaloniki; and Department of Neurology (G.P.P.), School of Medicine, National and Kapodistrian University of Athens, Eginition Hospital, Athens, Greece. Disclosure Scientific Advisory Boards: 1. NONE Gifts: 1. NONE Funding for Travel or Speaker Honoraria: 1. UCB, speaker honoraria Editorial Boards: 1. NONE Patents: 1. NONE Publishing Royalties: 1. NONE Employment, Commercial Entity: 1. NONE Consultancies: 1. NONE Speakers' Bureaus: 1. NONE Other Activities: 1. NONE Clinical Procedures or Imaging Studies: 1. NONE Research Support, Commercial Entities: 1. NONE Research Support, Government Entities: 1. NONE Research Support, Academic Entities: 1. NONE Research Support, Foundations and Societies: 1. NONE Stock/stock Options/board of Directors Compensation: 1. NONE License Fee Payments, Technology or Inventions: 1. NONE Royalty Payments, Technology or Inventions: 1. NONE Stock/stock Options, Research Sponsor: 1. NONE Stock/stock Options, Medical Equipment & Materials: 1. NONE Legal Proceedings: 1. NONE View all articles by this author Konstantinos Voumvourakis, MD, PhD From the Department of Neurology (G.X., C.M., D.G., A.P., M.S., G.M.H.), University Hospital of Larissa, Faculty of Medicine, School of Health Sciences, University of Thessaly, Larissa, Greece; Second Department of Neurology (C.Z., A.T., P.Z., A.B., K.V., G.T.), “Attikon” University Hospital, School of Medicine, National and Kapodistrian University of Athens, 12462 Athens, Greece; Department of Neurology (G.M.H.), Medical School, University of Cyprus, Nicosia, Cyprus; Department of Hygiene and Epidemiology (K.D., C.H.), Faculty of Medicine, University of Thessaly, Larissa, Greece; Department of Medical Oncology (P.N.), University Hospital of Ioannina, Ioannina, Greece; Department of Neurology (P.S.), Mediterraneo Hospital, Glyfada, Athens, Greece; Histopathological Department (C.N., S.S.), Hippokration General Hospital Thessaloniki; and Department of Neurology (G.P.P.), School of Medicine, National and Kapodistrian University of Athens, Eginition Hospital, Athens, Greece. Disclosure Scientific Advisory Boards: 1. NONE Gifts: 1. NONE Funding for Travel or Speaker Honoraria: 1. NONE Editorial Boards: 1. NONE Patents: 1. NONE Publishing Royalties: 1. NONE Employment, Commercial Entity: 1. NONE Consultancies: 1. NONE Speakers' Bureaus: 1. NONE Other Activities: 1. NONE Clinical Procedures or Imaging Studies: 1. NONE Research Support, Commercial Entities: 1. NONE Research Support, Government Entities: 1. NONE Research Support, Academic Entities: 1. NONE Research Support, Foundations and Societies: 1. 
NONE Stock/stock Options/board of Directors Compensation: 1. NONE License Fee Payments, Technology or Inventions: 1. NONE Royalty Payments, Technology or Inventions: 1. NONE Stock/stock Options, Research Sponsor: 1. NONE Stock/stock Options, Medical Equipment & Materials: 1. NONE Legal Proceedings: 1. NONE View all articles by this author Christos Hadjichristodoulou, MD, PhD From the Department of Neurology (G.X., C.M., D.G., A.P., M.S., G.M.H.), University Hospital of Larissa, Faculty of Medicine, School of Health Sciences, University of Thessaly, Larissa, Greece; Second Department of Neurology (C.Z., A.T., P.Z., A.B., K.V., G.T.), “Attikon” University Hospital, School of Medicine, National and Kapodistrian University of Athens, 12462 Athens, Greece; Department of Neurology (G.M.H.), Medical School, University of Cyprus, Nicosia, Cyprus; Department of Hygiene and Epidemiology (K.D., C.H.), Faculty of Medicine, University of Thessaly, Larissa, Greece; Department of Medical Oncology (P.N.), University Hospital of Ioannina, Ioannina, Greece; Department of Neurology (P.S.), Mediterraneo Hospital, Glyfada, Athens, Greece; Histopathological Department (C.N., S.S.), Hippokration General Hospital Thessaloniki; and Department of Neurology (G.P.P.), School of Medicine, National and Kapodistrian University of Athens, Eginition Hospital, Athens, Greece. Disclosure Scientific Advisory Boards: 1. NONE Gifts: 1. NONE Funding for Travel or Speaker Honoraria: 1. NONE Editorial Boards: 1. In July 2011 I became a Member of the Editorial Board of the Journal ISRN Public Health. In 2010 I became a Member of the Editorial Board of the Journal International Maritime Health (IMH). Patents: 1. NONE Publishing Royalties: 1. NONE Employment, Commercial Entity: 1. NONE Consultancies: 1. NONE Speakers' Bureaus: 1. NONE Other Activities: 1. NONE Clinical Procedures or Imaging Studies: 1. NONE Research Support, Commercial Entities: 1. NONE Research Support, Government Entities: 1. NONE Research Support, Academic Entities: 1. NONE Research Support, Foundations and Societies: 1. NONE Stock/stock Options/board of Directors Compensation: 1. NONE License Fee Payments, Technology or Inventions: 1. NONE Royalty Payments, Technology or Inventions: 1. NONE Stock/stock Options, Research Sponsor: 1. NONE Stock/stock Options, Medical Equipment & Materials: 1. NONE Legal Proceedings: 1. NONE View all articles by this author Georgios M.Hadjigeorgiou, MD, PhD From the Department of Neurology (G.X., C.M., D.G., A.P., M.S., G.M.H.), University Hospital of Larissa, Faculty of Medicine, School of Health Sciences, University of Thessaly, Larissa, Greece; Second Department of Neurology (C.Z., A.T., P.Z., A.B., K.V., G.T.), “Attikon” University Hospital, School of Medicine, National and Kapodistrian University of Athens, 12462 Athens, Greece; Department of Neurology (G.M.H.), Medical School, University of Cyprus, Nicosia, Cyprus; Department of Hygiene and Epidemiology (K.D., C.H.), Faculty of Medicine, University of Thessaly, Larissa, Greece; Department of Medical Oncology (P.N.), University Hospital of Ioannina, Ioannina, Greece; Department of Neurology (P.S.), Mediterraneo Hospital, Glyfada, Athens, Greece; Histopathological Department (C.N., S.S.), Hippokration General Hospital Thessaloniki; and Department of Neurology (G.P.P.), School of Medicine, National and Kapodistrian University of Athens, Eginition Hospital, Athens, Greece. Disclosure Scientific Advisory Boards: 1. NONE Gifts: 1. NONE Funding for Travel or Speaker Honoraria: 1. 
NONE Editorial Boards: 1. Editorial Advisory Board for the journal "Neurological Sciences" starting from Feb 2018. Patents: 1. NONE Publishing Royalties: 1. NONE Employment, Commercial Entity: 1. NONE Consultancies: 1. NONE Speakers' Bureaus: 1. NONE Other Activities: 1. NONE Clinical Procedures or Imaging Studies: 1. NONE Research Support, Commercial Entities: 1. NONE Research Support, Government Entities: 1. NONE Research Support, Academic Entities: 1. NONE Research Support, Foundations and Societies: 1. NONE Stock/stock Options/board of Directors Compensation: 1. NONE License Fee Payments, Technology or Inventions: 1. NONE Royalty Payments, Technology or Inventions: 1. NONE Stock/stock Options, Research Sponsor: 1. NONE Stock/stock Options, Medical Equipment & Materials: 1. NONE Legal Proceedings: 1. NONE View all articles by this author Georgios Tsivgoulis, MD, PhD From the Department of Neurology (G.X., C.M., D.G., A.P., M.S., G.M.H.), University Hospital of Larissa, Faculty of Medicine, School of Health Sciences, University of Thessaly, Larissa, Greece; Second Department of Neurology (C.Z., A.T., P.Z., A.B., K.V., G.T.), “Attikon” University Hospital, School of Medicine, National and Kapodistrian University of Athens, 12462 Athens, Greece; Department of Neurology (G.M.H.), Medical School, University of Cyprus, Nicosia, Cyprus; Department of Hygiene and Epidemiology (K.D., C.H.), Faculty of Medicine, University of Thessaly, Larissa, Greece; Department of Medical Oncology (P.N.), University Hospital of Ioannina, Ioannina, Greece; Department of Neurology (P.S.), Mediterraneo Hospital, Glyfada, Athens, Greece; Histopathological Department (C.N., S.S.), Hippokration General Hospital Thessaloniki; and Department of Neurology (G.P.P.), School of Medicine, National and Kapodistrian University of Athens, Eginition Hospital, Athens, Greece. Disclosure Scientific Advisory Boards: 1. NONE Gifts: 1. NONE Funding for Travel or Speaker Honoraria: 1. NONE Editorial Boards: 1. Journal of Neuroimaging, Associate Editor for Neurosonology, 2009- Stroke, Editorial Board Member, 2015- Patents: 1. NONE Publishing Royalties: 1. NONE Employment, Commercial Entity: 1. NONE Consultancies: 1. NONE Speakers' Bureaus: 1. NONE Other Activities: 1. NONE Clinical Procedures or Imaging Studies: 1. NONE Research Support, Commercial Entities: 1. NONE Research Support, Government Entities: 1. NONE Research Support, Academic Entities: 1. NONE Research Support, Foundations and Societies: 1. NONE Stock/stock Options/board of Directors Compensation: 1. NONE License Fee Payments, Technology or Inventions: 1. NONE Royalty Payments, Technology or Inventions: 1. NONE Stock/stock Options, Research Sponsor: 1. NONE Stock/stock Options, Medical Equipment & Materials: 1. NONE Legal Proceedings: 1. NONE View all articles by this author Notes Correspondence Dr. Xiromerisiou georgiaxiromerisiou@gmail.com Go to Neurology.org/NG for full disclosures. Funding informaton is provided at the end of the article. The Article Processing Charge was funded by the authors. Metrics & Citations Metrics Citations 23 Metrics Article Metrics Accesses Citations No data available. 1,194 21 Total 6 Months 12 Months Total number of accesses and citations for the most recent 6 whole calendar months. Citation information is sourced from Crossref Cited-by service. Citations Download Citations If you have the appropriate software installed, you can download article citation data to the citation manager of your choice. 
Select your manager software from the list below and click Download. Format [x] Direct Import Cited By Ioannis Zaganas, Ioannis Tsiverdis, Evgenia Kokosali, Ionela Litso, Minas Drakos, Irene Skoula, Alexandros Zabetakis, Lambros Mathioudakis, Vassilios Mastorodemos, Panayiotis Mitsias, Study of the NOTCH3 Gene Reveals the First CADASIL Cases in Crete and a Novel Pathogenic Variant , Brain and Behavior, 15, 9, (2025). Crossref Favour Felix-Ilemhenbhio, Klaudia Kocsy, Mimoun Azzouz, Arshad Majid, The role of NOTCH3 in CADASIL pathogenesis: insights into novel therapies, Brain Research, 1863, (149754), (2025). Crossref Yuan Cao, Ding-Ding Zhang, Fei Han, Nan Jiang, Ming Yao, Yi-Cheng Zhu, Phenotypes Associated with NOTCH3 Cysteine-Sparing Mutations in Patients with Clinical Suspicion of CADASIL: A Systematic Review, International Journal of Molecular Sciences, 25, 16, (8796), (2024). Crossref Marina Blanco-Ruiz, Jeffrey L. Saver, Distinctive feeding vessel architecture and sparse collateralization may underlie the characteristic early temporal pole and external capsule white matter compromise in CADASIL, Medical Hypotheses, 183, (111267), (2024). Crossref Jae‐Sung Lim, Keon‐Joo Lee, Beom Joon Kim, Wi‐Sun Ryu, Jinyong Chung, Dong‐Seok Gwak, Ji Sung Lee, Seong‐Eun Kim, Eunvin Ko, Juneyoung Lee, Moon‐Ku Han, Eric E. Smith, Dong‐Eog Kim, Hee‐Joon Bae, Nonhypertensive White Matter Hyperintensities in Stroke: Risk Factors, Neuroimaging Characteristics, and Prognosis, Journal of the American Heart Association, 12, 23, (2023). Crossref Zeynep Selcan Şanli, Özlem Anlaş, Detecting a Novel NOTCH3 Variant in Patients with Suspected CADASIL: A Single Center Study, Molecular Syndromology, 15, 2, (89-95), (2023). Crossref Soo Jung Lee, Xiaojie Zhang, Emily Wu, Richard Sukpraphrute, Catherine Sukpraphrute, Andrew Ye, Michael M. Wang, Structural changes in NOTCH3 induced by CADASIL mutations: Role of cysteine and non-cysteine alterations, Journal of Biological Chemistry, 299, 6, (104838), (2023). Crossref Renata Nogueira, Christian Marques Couto, Pérola de Oliveira, Bernardo José Alves Ferreira Martins, Vinícius Viana Abreu Montanaro, Clinical and epidemiological profiles from a case series of 26 Brazilian CADASIL patients, Arquivos de Neuro-Psiquiatria, 81, 05, (417-425), (2023). Crossref Shino Magaki, Zesheng Chen, Alyscia Severance, Christopher K Williams, Ramiro Diaz, Chuo Fang, Negar Khanlou, William H Yong, Annlia Paganini-Hill, Rajesh N Kalaria, Harry V Vinters, Mark Fisher, Neuropathology of microbleeds in cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), Journal of Neuropathology & Experimental Neurology, 82, 4, (333-344), (2023). Crossref Simona Hankeova, Noemi Van Hul, Jakub Laznovsky, Elisabeth Verboven, Katrin Mangold, Naomi Hensens, Csaba Adori, Elvira Verhoef, Tomas Zikmund, Feven Dawit, Michaela Kavkova, Jakub Salplachta, Marika Sjöqvist, Bengt R Johansson, Mohamed G Hassan, Linda Fredriksson, Karsten Baumgärtel, Vitezslav Bryja, Urban Lendahl, Andrew Jheon, Florian Alten, Kristina Teär Fahnehjelm, Björn Fischler, Jozef Kaiser, Emma R Andersson, Sex differences and risk factors for bleeding in Alagille syndrome, EMBO Molecular Medicine, 14, 12, (2022). Crossref See more Loading... 
[Figures from the article (images not reproduced):
Figure 1: Brain MRI of the patient (A–C) showing extensive leukoencephalopathy, mainly in the frontal lobes, the anterior temporal lobe, the external capsules, and the basal ganglia on FLAIR sequences, and of the daughter (D–F) showing multiple FLAIR hyperintensities.
Figure 2: Flow chart of the literature search and study selection for the systematic review.
Figure 3: World map of the frequencies of CADASIL mutations across countries in the reviewed cases.
Figure 4: Distribution of the most common mutations across the NOTCH3 gene.
Figure 5: Univariate analysis of the effect of mutation pathogenicity score on age at onset.
Figure 6: Differences between men and women in phenotype severity (phenotype severity–sex, p = 0.003).
Figure 7: Distribution of CADASIL phenotypes across age (phenotype severity–age, p < 0.001).
Figure 8: Distribution of mutations across the exons of the gene by pathogenicity score (exon–pathogenicity score, p < 0.001).]
188804
http://www.cse.unsw.edu.au/~haziz/diverse-commt.pdf
manuscript No. (will be inserted by the editor)

A Rule for Committee Selection with Soft Diversity Constraints

Haris Aziz

H. Aziz
UNSW Sydney and Data61 CSIRO, Sydney 2052, Australia
Tel.: +61-2-8306 0490, Fax: +61-2-8306 0405
E-mail: haris.aziz@unsw.edu.au

Abstract Committee selection with diversity or distributional constraints is a ubiquitous problem. However, many of the formal approaches proposed so far have certain drawbacks, including (1) computational intractability in general, and (2) inability to suggest a solution for instances where the hard constraints cannot be met. We propose a cubic-time algorithm for diverse committee selection that satisfies natural axioms and draws on the idea of using soft bounds.

Keywords Social choice theory · committee voting · multi-winner voting · diversity constraints · computational complexity

JEL Classification: C70 · D61 · D71

1 Introduction

Selecting a target number of candidates is a ubiquitous problem that occurs in faculty hiring, scholarship selection, corporate board elections, and the formation of representative bodies (Aziz et al., 2017a,b; Ratliff, 2006; Faliszewski et al., 2017). In many of these settings, there may be natural distributional constraints motivated, for example, by diversity. In certain European countries, there is a requirement of having a minimum percentage of females on corporate boards. In some school admission guidelines, there are quotas for less-advantaged groups.

Finding the best set of candidates subject to diversity constraints has also been formally studied in social choice. In several works, the problem of diverse committee selection is viewed as a problem in which candidates have different (possibly multiple) types and the committee has distributional constraints on each of the types (see, e.g., Brams and Potthoff, 1990; Bredereck et al., 2018; Celis et al., 2018; Potthoff, 1990; Straszak et al., 1993). There are a few drawbacks of approaches that use hard distributional constraints, including the following: (1) there may be instances of the diverse committee selection problem that do not admit any feasible solution satisfying the constraints (for example, there simply may not be enough female applicants), and (2) the hard constraints make the problem of committee selection computationally hard (for example, if we require that each type have at least one representative, checking whether there exists a committee satisfying the requirement is NP-complete; see, e.g., Aziz et al., 2016). Drawback (1) can especially arise if there are many types of applicants and there are diversity targets for all of them. For example, in large-scale admissions programs in India, there are quotas for several types of backgrounds. For approaches that are NP-hard, the lack of a simple polynomial-time algorithm may render them impractical for large enough instances; even for smaller instances, these approaches cannot be used without resorting to a computer. Finally, not imposing hard constraints may allow for outcomes that are more preferred by more agents.

Some other approaches consider distances between candidates or committees based on their type attributes and then view diversity not as a constraint but as an optimisation objective based on the distances (see, e.g., Kuo et al., 1993; Lang and Skowron, 2016). These approaches do not generally take into account the excellence of the candidates, and the underlying problems are NP-hard.
Apart from imposing hard distributional constraints, another approach that is often used in real-life committee selection to achieve diversity is to give bonus points or ranking boosts to candidates who are from under-represented groups.¹ When these rules are imposed centrally, they may come across as arbitrary fixes to diversity issues. If the decision makers or voters instead internally take diversity into account while formulating an objective linear ranking, it puts a cognitive burden on the voters to mix diversity prioritisation with objective estimation of excellence.

In this paper, we consider the committee selection problem with distributional constraints and focus on the most common constraints, whereby at least a certain fraction of the candidates should satisfy a given type.² Our approach is to view the distributional constraints as soft constraints which should be satisfied as much as possible. Often, real-life diversity guidelines need not be hard constraints but general rules of thumb to achieve procedural fairness. We present a simple cubic-time algorithm that simultaneously satisfies two axioms called type optimality and justified envy-freeness. The axiom of justified envy-freeness is inspired by the matching market literature. The combination of the two axioms can be viewed as finding a committee that is as close as possible to satisfying the hard distributional constraints while also selecting the best candidates.

¹ The model where synergies or the presence of diverse agents provide additional points to the committee has been considered in a general model by Izsak et al. (2018).
² More general models also allow for expressing upper quotas. The goal of the upper quotas can easily be met by setting lower quotas on the complement of the set of types.

2 Setup

The setting involves a set of candidates C, a weak order ≿ over C, a set of types T, a matrix τ that specifies whether a candidate is of a certain type, and a vector q that specifies the lower quota bound for each type. A diverse committee selection instance can be summarized as (C, ≿, T, τ, q) where

– C = {c1, . . . , cm} is the set of candidates.
– The weak order ≿ over C is the priority order over the candidates.
– T = {t1, t2, . . . , tℓ} is the set of types. We will use t to refer to a generic type in T.
– τ is a matrix consisting of each candidate c's type vector τ_c, where
  – τ_c is a row vector consisting of 1's and 0's, and
  – τ_c^t = 1 if c belongs to type t and τ_c^t = 0 otherwise.
– q is a vector consisting of all type-specific lower bounds. The value q_t denotes the lower bound for type t.³

³ Note that q_t is not a vector. The underscore denotes that it is a lower bound.

We will denote the set of all types that a candidate c belongs to by η(c). For c, d ∈ C, if c ≿ d but not d ≿ c, we will write the strict part of the relation as c ≻ d. Note that the model is powerful enough to capture lower bounds of the following kind: "there should be at least x members who are of one of the types from a set S ⊂ T." In that case, one can create an 'artificial' type t_S such that for any c ∈ C, τ_c^{t_S} = 1 if τ_c^t = 1 for some t ∈ S. For a committee W ⊂ C, we will denote Σ_{c∈W} τ_c by τ_W, and the number of candidates of type t in W by τ_W(t). For a committee W ⊂ C, if τ_W(t) < q_t, we will say that type t is under-represented in W.

The linear ranking over C could be based on some objective measure that reflects the global quality of the candidates, such as entrance examination scores.
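To make the notation concrete, here is a minimal Python sketch (ours, not from the paper; the helper names are hypothetical) that encodes an instance (C, ≿, T, τ, q) and computes τ_W and the set of under-represented types. The data is exactly the instance that appears as Example 1 below.

    # A minimal sketch (not from the paper) encoding a diverse committee
    # selection instance; the data is the instance of Example 1 below.

    # Candidates listed in priority order: c1 ≻ c2 ≻ c3 ≻ c4.
    C = ["c1", "c2", "c3", "c4"]

    # τ: one row per candidate, one column per type t1..t5.
    tau = {
        "c1": [1, 0, 0, 0, 0],
        "c2": [0, 1, 0, 0, 0],
        "c3": [0, 0, 1, 0, 1],
        "c4": [0, 1, 1, 0, 0],
    }

    q = [0, 1, 2, 1, 0]  # lower bound q_t for each type

    def tau_W(W):
        """τ_W = Σ_{c∈W} τ_c: per-type counts of committee W."""
        return [sum(tau[c][t] for c in W) for t in range(len(q))]

    def under_represented(W):
        """Types t with τ_W(t) < q_t."""
        counts = tau_W(W)
        return [t for t in range(len(q)) if counts[t] < q[t]]

    if __name__ == "__main__":
        W = {"c3", "c4"}
        print(tau_W(W))              # [0, 1, 2, 0, 1]
        print(under_represented(W))  # [3], i.e., type t4 is under-represented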
The ranking could also be based on aggregate scores from some positional scoring vote cast by voters over the candidates (Brams and Potthoff, 1990; Bredereck et al., 2018). The goal in the committee selection problem is to select a target number of candidates. We will denote the target size by k.

Example 1 Consider the following instance.

– C = {c1, c2, c3, c4}
– c1 ≻ c2 ≻ c3 ≻ c4
– T = {t1, t2, t3, t4, t5}
– τ consists of the rows
  τ_c1 = (1, 0, 0, 0, 0)
  τ_c2 = (0, 1, 0, 0, 0)
  τ_c3 = (0, 0, 1, 0, 1)
  τ_c4 = (0, 1, 1, 0, 0)
– q = (0, 1, 2, 1, 0)

Suppose the target committee size is two. There is no feasible committee that can satisfy the hard constraints (no candidate is of type t4). The committee {c3, c4} satisfies all the constraints except the one corresponding to t4.

3 Axioms for Diverse Committee Selection

We formalize some axiomatic properties that are desirable in our context. A committee W satisfies a type distribution (x1, . . . , xℓ) if, for each i, it has at least x_i members of type t_i.

Definition 1 (domination between type distributions) A type distribution x = (x1, . . . , xℓ) weakly dominates another type distribution y = (y1, . . . , yℓ) if
(i) for each t_i such that y_i ≥ q_i, we also have x_i ≥ q_i, and
(ii) for each t_i such that y_i < q_i, either x_i ≥ q_i or |x_i − q_i| ≤ |y_i − q_i|.
When x weakly dominates y, we denote it by x ≥ y. We say x dominates y if x ≥ y but y ≱ x.

Next, we observe the following lemma regarding the transitivity of the weak domination relation between type distributions.

Lemma 1 If τ_X ≥ τ_Y and τ_Y ≥ τ_Z, then τ_X ≥ τ_Z.

Proof. By the definition of weak domination, if some type t_i is not under-represented in Z, then it is also not under-represented in Y; and if t_i is under-represented in Z, then either it is not under-represented in Y or its representation in Y is at least as much. By the same reasoning, if some type is not under-represented in Y, then it is also not under-represented in X; and if t_i is under-represented in Y, then either it is not under-represented in X or its representation in X is at least as much. Hence we obtain the following implications: (i) if some type t_i is not under-represented in Z, then it is also not under-represented in X, and (ii) if t_i is under-represented in Z, then either it is not under-represented in X or its representation in X is at least as much. By the definition of domination between type distributions, τ_X ≥ τ_Z. ⊓⊔

Based on the notion of dominance between type distributions, we are now in a position to define type optimality of a committee. Roughly speaking, a committee W is type optimal if there exists no other committee, obtained by swapping in one candidate for another, whose type distribution dominates that of W.

Definition 2 (Type optimal) A committee W is type optimal if there exist no candidates c′ ∈ W and c ∉ W such that τ_{(W\{c′})∪{c}} dominates τ_W.

We note that type optimality is desirable in terms of the distributional constraints but does not take into account the excellence of the candidates. Type optimality has been defined in a local sense, based on swaps of single candidates. If we define it in a global sense, by allowing swaps of subsets of candidates for subsets of candidates, then checking whether a given type distribution is optimal is NP-complete.
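For concreteness, Definition 1 transcribes directly into code. The following minimal Python sketch (ours; the function names are our own) checks weak domination and domination of one integer type distribution over another, given the lower bounds q:

    def weakly_dominates(x, y, q):
        """Definition 1: distribution x weakly dominates y w.r.t. lower bounds q."""
        for xi, yi, qi in zip(x, y, q):
            if yi >= qi:
                if xi < qi:      # (i) a type meeting its quota must keep meeting it
                    return False
            elif not (xi >= qi or abs(xi - qi) <= abs(yi - qi)):
                return False     # (ii) an under-represented type must not fall further behind
        return True

    def dominates(x, y, q):
        """x dominates y: x ≥ y but y ≱ x."""
        return weakly_dominates(x, y, q) and not weakly_dominates(y, x, q)

    # With q = [0, 1, 2, 1, 0] from Example 1, swapping c2 for c4 in {c2, c3}
    # improves the distribution: dominates([0,1,2,0,1], [0,1,1,0,1], q) is True.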
We now present an axiom that avoids scenarios in which a candidate may feel that she deserves the place of a lower-ranked candidate. The axiom is adapted from the literature on stable matching with distributional constraints (Kurata et al., 2017; Goto et al., 2017; Kojima et al., 2014; Ehlers et al., 2014; Kamada and Kojima, 2015). The intuition behind the axiom is that a candidate c ∉ W has justified envy towards c′ ∈ W if c has higher priority than c′, and replacing c′ with c does not take some type t_i from reaching its desired quota to being under-represented, or from being under-represented to being even more under-represented.

Definition 3 (Justified envy-freeness) A committee W satisfies justified envy-freeness if there are no candidates c ∉ W and c′ ∈ W such that c ≻ c′ and there exists no type t_i ∈ η(c′) \ η(c) for which the number of candidates in W of type t_i is less than or equal to q_i.

We note that justified envy-freeness by itself can be trivially satisfied by a committee that selects the top k ranked candidates. Such a committee is score-optimal, i.e., it maximizes the total score if points were associated with the ordinal ranks in the priority list. However, such a committee may not respect any of the distributional constraints.

We have identified justified envy-freeness and type optimality as two desirable axioms for our setting. The two axioms are necessarily satisfied by any committee that is score-optimal subject to meeting the hard distributional constraints. If a committee is not type optimal, then it does not satisfy the distributional constraints. If it does not satisfy justified envy-freeness, then a swap of two candidates can increase the total points of the committee without violating its hard constraints, which means that the committee was not score-optimal subject to the constraints. In the next section, we present an algorithm that returns a committee satisfying both axioms.

4 A Rule for Diverse Committee Selection

We are now in a position to present our algorithm (Algorithm 1) for finding a diverse committee. In the first stage (Steps 1 to 7), the algorithm checks whether there is a type that is under-represented and then selects the highest-priority candidate who has such a type; this stage is along similar lines as Greedy Algorithm I of Bredereck et al. (2018). If no type is under-represented, the algorithm adds the required number of highest-ranking candidates. The second stage (Steps 8 to 10) is geared towards obtaining a good type distribution: if the committee does not satisfy type optimality, candidates are exchanged until it does. In the final while loop (Steps 11 to 13), the algorithm swaps candidates as long as there is justified envy, and stops when the committee satisfies justified envy-freeness. In certain steps of the algorithm, there may be multiple options for choosing candidates to include in or exclude from the committee; in this sense the algorithm represents a class of rules rather than a single rule. The exact choice can be established by fixing any tie-breaking rule. We present the algorithm without specifying the tie-breaking rule because the axiomatic properties satisfied by the algorithm do not depend on tie-breaking.

Algorithm 1 Rule for finding a desirable committee satisfying soft distributional constraints on types
Input: (C, ≿, T, τ, q). Output: W ⊆ C such that |W| = k
1: W ← ∅
2: while |W| < k and there exists some candidate in C \ W of some type t that is under-represented in W do
3:     Add a highest-priority candidate of that type t to W.
4: end while
5: if |W| < k then
6:     Select the highest-ranked k − |W| candidates from C \ W into W.
7: end if
8: while there is a candidate c ∉ W and a candidate c′ ∈ W such that τ_{(W\{c′})∪{c}} dominates τ_W do
9:     W ← (W \ {c′}) ∪ {c}
10: end while
11: while there is a candidate c ∉ W who has justified envy towards some candidate c′ ∈ W do
12:     W ← (W \ {c′}) ∪ {c}
13: end while
14: return W

Let us illustrate the algorithm.

Example 2 Consider the instance of Example 1, with target committee size k = 2. The algorithm first considers a type t2 that is under-represented and selects c2 because c2 ≻ c4. It then considers type t3, which is under-represented, and selects c3. At this point the type distribution is not optimal and can be improved by exchanging c2 with c4, so that W = {c3, c4}. At this point the committee is type optimal and also satisfies justified envy-freeness.

Proposition 1 Algorithm 1 returns, in time O(|C|³), a committee that satisfies justified envy-freeness.

Proof. Note that by Step 7, we already have a committee of size k. The first while loop takes time at most O(k|C|). In the second while loop, the type distribution keeps improving, since the type domination relation is transitive (Lemma 1), but there can be at most |C|² such improvements; each iteration requires a scan of at most |C| candidates, so the time taken for the second while loop is O(|C|³). Finally, in the last while loop, each swap replaces a candidate by a candidate with a higher ranking, which can happen at most O(|C|²) times; again each iteration requires a scan of at most |C| candidates, so the time taken for the last while loop is O(|C|³). The loop terminates when no candidate has justified envy towards any candidate in W. ⊓⊔

Although Algorithm 1 finds a type optimal committee by Step 10, the committee undergoes further changes in the final while loop to achieve justified envy-freeness. We now argue that the returned committee still satisfies type optimality.

Proposition 2 Algorithm 1 returns a committee satisfying type optimality.

Proof. In the second while loop, we start from a committee of size k. By the argument in the proof of Proposition 1, we know that by the end of Step 10, W is a committee that is type optimal. We show that in the final while loop, W remains type optimal. In the final while loop, we implement swaps if there is a candidate c ∉ W who has justified envy towards some candidate c′ ∈ W. Suppose we swap c′ with c. We claim that (W \ {c′}) ∪ {c} is also type optimal. Since c had justified envy against c′, by the definition of justified envy there exists no type t_i ∈ η(c′) \ η(c) such that the number of candidates in W of type t_i is less than or equal to q_i. If there is a type t_i such that τ_W(t_i) ≥ q_i, then τ_{(W\{c′})∪{c}}(t_i) ≥ q_i; in words, if a type is not under-represented in W, it is not under-represented in (W \ {c′}) ∪ {c}. If there is a type t_i such that τ_W(t_i) < q_i, then τ_W(t_i) ≤ τ_{(W\{c′})∪{c}}(t_i) ≤ q_i; in words, if a type is under-represented in W, it is at most as under-represented in (W \ {c′}) ∪ {c}. Thus we have established that τ_{(W\{c′})∪{c}} ≥ τ_W. Since W was type optimal, (W \ {c′}) ∪ {c} is type optimal as well. ⊓⊔
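To complement the pseudocode, here is a compact Python sketch of Algorithm 1 (ours, not the paper's reference code). It reuses C, tau, q, tau_W, and dominates from the earlier snippets, and breaks ties by list order; the axiomatic guarantees do not depend on that choice. On the data of Examples 1 and 2 with k = 2 it returns {c3, c4}, matching the walkthrough above.

    def eta(c):
        """η(c): the set of type indices that candidate c belongs to."""
        return {t for t, bit in enumerate(tau[c]) if bit == 1}

    def select_committee(C, tau, q, k):
        rank = {c: i for i, c in enumerate(C)}   # C is listed in priority order
        W = []

        # Stage 1 (Steps 1-7): cover under-represented types by priority,
        # then fill the remaining seats with the highest-ranked candidates.
        while len(W) < k:
            counts = tau_W(W)
            short = [t for t in range(len(q)) if counts[t] < q[t]]
            cands = [c for c in C if c not in W and any(tau[c][t] for t in short)]
            if not cands:
                break
            W.append(cands[0])                   # highest-priority such candidate
        W += [c for c in C if c not in W][: k - len(W)]

        # Stage 2 (Steps 8-10): swap while a swap improves the type distribution.
        def improving_swap(W):
            for cin in (c for c in C if c not in W):
                for cout in W:
                    new = [c for c in W if c != cout] + [cin]
                    if dominates(tau_W(new), tau_W(W), q):
                        return new
            return None

        # Stage 3 (Steps 11-13): swap while some candidate has justified envy.
        def envy_swap(W):
            counts = tau_W(W)
            for cin in (c for c in C if c not in W):
                for cout in W:
                    if rank[cin] < rank[cout] and not any(
                            counts[t] <= q[t] for t in eta(cout) - eta(cin)):
                        return [c for c in W if c != cout] + [cin]
            return None

        for step in (improving_swap, envy_swap):
            while (new := step(W)) is not None:
                W = new
        return sorted(W, key=rank.get)

    print(select_committee(C, tau, q, k=2))      # ['c3', 'c4'] on Example 1's data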
We have shown that our algorithm simultaneously satisfies both type optimality and justified envy-freeness. Since these are two basic properties satisfied by optimal committees under hard constraints, our algorithm provides a computationally easy and principled rule for finding desirable committees that 'almost' satisfy the distributional constraints. It will be interesting to see whether other desirable axioms can be simultaneously satisfied in conjunction with the ones our algorithm satisfies. Our cubic-time algorithm may not return a committee satisfying the distributional constraints even if such a committee exists. However, this need not be viewed as a criticism of the algorithm, since the problem of checking whether such a committee exists is NP-complete, and a polynomial-time algorithm is unlikely to exist unless P = NP (Papadimitriou, 1994).

Our general algorithm admits more precise specifications that prioritise certain types in a lexicographic manner or implement swaps according to some pre-determined pattern. The ranking ≿ can be derived by using some social welfare function for a set of voters voting on the quality of the candidates. Finally, it would be interesting to undertake a more experimental comparison of our rule with other methods proposed in the literature.

References

H. Aziz, J. Lang, and J. Monnot. Computing Pareto optimal committees. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI), pages 60–66, 2016.
H. Aziz, M. Brill, V. Conitzer, E. Elkind, R. Freeman, and T. Walsh. Justified representation in approval-based committee voting. Social Choice and Welfare, 48(2):461–485, 2017a.
H. Aziz, E. Elkind, P. Faliszewski, M. Lackner, and P. Skowron. The Condorcet principle for multiwinner elections: From shortlisting to proportionality. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pages 84–90, 2017b.
S. J. Brams and R. F. Potthoff. Constrained approval voting: A voting system to elect a governing board. Interfaces, 20(2):67–80, 1990.
R. Bredereck, P. Faliszewski, A. Igarashi, M. Lackner, and P. Skowron. Multiwinner elections with diversity constraints. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2018.
L. E. Celis, L. Huang, and N. K. Vishnoi. Group fairness in multiwinner voting. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pages 144–151, 2018.
L. Ehlers, I. E. Hafalir, M. B. Yenmez, and M. A. Yildirim. School choice with controlled choice constraints: Hard bounds versus soft bounds. Journal of Economic Theory, 153:648–683, 2014.
P. Faliszewski, P. Skowron, A. Slinko, and N. Talmon. Multiwinner voting: A new challenge for social choice theory. In U. Endriss, editor, Trends in Computational Social Choice, chapter 2. 2017.
M. Goto, F. Kojima, R. Kurata, A. Tamura, and M. Yokoo. Designing matching mechanisms under general distributional constraints. American Economic Journal: Microeconomics, 9(2):226–262, 2017.
R. Izsak, N. Talmon, and G. Woeginger. Committee selection with intraclass and interclass synergies. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2018.
Y. Kamada and F. Kojima. Efficient matching under distributional constraints: Theory and applications. The American Economic Review, 105(1):67–99, 2015.
F. Kojima, A. Tamura, and M. Yokoo. Designing matching mechanisms under constraints: An approach from discrete convex analysis. 2014.
C.-C. Kuo, F. Glover, and K. S. Dhir. Analyzing and modeling the maximum diversity problem by zero-one programming. Decision Sciences, 24(6):1171–1185, 1993.
R. Kurata, N. Hamada, A. Iwasaki, and M. Yokoo. Controlled school choice with soft bounds and overlapping types. Journal of Artificial Intelligence Research, 58:153–184, 2017.
J. Lang and P. Skowron. Multi-attribute proportional representation. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), pages 530–536. AAAI Press, 2016.
C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
R. Potthoff. Use of linear programming for constrained approval voting. Interfaces, 20(5):79–80, 1990.
T. C. Ratliff. Selecting committees. Public Choice, 126(3/4):343–355, 2006.
A. Straszak, M. Libura, J. Sikorski, and D. Wagner. Computer-assisted constrained approval voting. Group Decision and Negotiation, 2(4):375–385, 1993.
188805
https://www.cs.columbia.edu/~tal/3261/sp17/flora_lecture_notes.pdf
COMS W3261: Computer Science Theory, April 12, 2017
Lecture (4/11): Mapping Reductions
Instructor: Tal Malkin. Lecture given by: Flora Min Jung Park (mp3369)

1 Understanding Mapping Reductions

1.1 Recap

Definition 1. Language A is mapping reducible to language B (A ≤m B) if there is a computable function f : Σ* → Σ* such that for every w, w ∈ A ⇔ f(w) ∈ B.

Example 2. Let's consider the following languages A and B: A = {strings in {a, b}* with the same number of a's and b's}, B = {aⁿbⁿ : n ≥ 0}. We can construct our mapping f to be, for example, f(w) = the string obtained by sorting the symbols of w (thus putting all a's before all b's).
- For any string w, f(w) ∈ B if and only if w ∈ A.
- Note that this f happens to be onto (though it doesn't have to be), but it is not one-to-one.

1.2 Mapping Reduction in Relation to Turing Reduction

Theorem 3. If A ≤m B, then A ≤T B.

Proof. Let f be the mapping, and D_B a decider for B. Then we can construct a decider D_A for A:
D_A on input w:
• compute y = f(w)
• run D_B(y) and output the same answer

Fact 4. On the other hand, A ≤T B does not necessarily imply A ≤m B. However, it does in the following special case: if the Turing reduction uses the decider exactly once and always outputs the same answer (no flipping of accept/reject), then the Turing reduction implies a mapping reduction.

Theorem 5. Suppose A ≤T B via a reduction that calls the decider D_B for B exactly once and outputs the same answer. Then A ≤m B.

Proof. Let A ≤T B via a reduction that constructs a decider D_A for A by calling a decider D_B for B exactly once and outputting the same thing as D_B. We define f(w) as the input that D_B is invoked on when D_A starts with input w. This f is clearly computable, as it is exactly what D_A computes on input w before calling D_B on f(w). It satisfies w ∈ A if and only if f(w) ∈ B, because the output of D_A on w is the same as the output of D_B on f(w).

1.3 Mapping Reduction Properties

Theorem 6. If A ≤m B and B is recognizable, then A is also recognizable.

Proof. Let f be the mapping, and M_B a recognizer for B. We can construct a recognizer M_A:
M_A on input w:
• compute y = f(w) (here we use the fact that f is a computable function, as part of the definition of mapping reduction)
• run M_B(y)
• if it accepts, accept; if it rejects, reject.

Analysis: Note that since f is a computable function, the first step is computable in finite time.
If w ∈ A ⇒ y = f(w) ∈ B ⇒ M_B(y) accepts ⇒ M_A(w) accepts.
If w ∉ A ⇒ y = f(w) ∉ B ⇒ M_B(y) rejects or runs forever ⇒ M_A(w) rejects or loops forever.

Theorem 7. A ≤m B iff Ā ≤m B̄ (where Ā denotes the complement of A).

Proof. We prove both directions of the equivalence.
→: If A ≤m B, then Ā ≤m B̄. Assume A ≤m B, and let f be the mapping for this reduction. We know that f satisfies w ∈ A ⇔ f(w) ∈ B. This is equivalent to w ∉ A ⇔ f(w) ∉ B (a mapping reduction always answers yes/yes, no/no). Thus we can use this same f to map Ā ≤m B̄.
←: If Ā ≤m B̄, then A ≤m B: follows from the above, by noticing that the complement of Ā is A and the complement of B̄ is B.

Some other possible exercises:

Corollary 8. If A ≤m B and A is not recognizable, then B is not recognizable.

Proof. Same reasoning with the same f construction (understand it as the contrapositive of Theorem 6).

Theorem 9. If A ≤m B and B is decidable, then A is also decidable.

Corollary 10. If A ≤m B and A is not decidable, then B is not decidable.

Corollary 11. If A ≤m B and Ā is not recognizable (A is not co-recognizable), then B̄ is not recognizable (B is not co-recognizable).

Fact 12. Note that for Turing reductions you can add complements arbitrarily. For mapping reductions you can only add complements to both sides, not just one side at a time; otherwise the statement may no longer be true.

Example 13. EQ_TM = {⟨M1, M2⟩ : M1, M2 are TMs and L(M1) = L(M2)} is neither recognizable nor co-recognizable.

Claim 14. EQ_TM is not recognizable.

Proof. In the previous lecture we saw E_TM ≤T EQ_TM. This reduction is actually a mapping reduction, where our mapping f corresponds to f(⟨M⟩) = ⟨M_∅, M⟩, with M_∅ the TM that rejects all inputs. (This is a sufficient explanation because we already know E_TM is not recognizable.) This is a mapping reduction because f is a computable function: it outputs the encoding of M_∅ (which can be hard-coded into our algorithm), followed by the encoding of M, which is just copying the input. Moreover,
⟨M⟩ ∈ E_TM ⇔ M is a TM and L(M) = ∅ = L(M_∅) ⇔ ⟨M_∅, M⟩ ∈ EQ_TM.

Claim 15. EQ_TM is not co-recognizable (i.e., the complement of EQ_TM is not recognizable).

Proof. Let's prove that A_TM ≤m EQ_TM (or, equivalently, that the complement of A_TM maps to the complement of EQ_TM). This is sufficient because we know that the complement of A_TM is not recognizable (we proved in class that A_TM is undecidable but recognizable). We construct an f such that f(⟨M, w⟩) = ⟨M1, M2⟩, where M1 accepts all inputs, and M2, on any input x, runs M on w and accepts x if M accepts. We can easily deduce that this f is computable as well.
Analysis: If ⟨M, w⟩ ∈ A_TM then M accepts w, so M2 accepts all inputs x, namely L(M2) = Σ* = L(M1); therefore ⟨M1, M2⟩ ∈ EQ_TM. If ⟨M, w⟩ ∉ A_TM then M does not accept w, so M2 does not accept any input x, namely L(M2) = ∅ ≠ L(M1); therefore ⟨M1, M2⟩ ∉ EQ_TM.

Example 16. Revisiting the proof of Rice's Theorem. For proving Rice's Theorem, we showed that for any non-trivial language property P: if ∅ does not satisfy the property, then there is a mapping reduction A_TM ≤m P; and if ∅ does satisfy the property, then the reduction maps A_TM to the complement of P (equivalently, the complement of A_TM maps to P). By complementing both sides and recalling that the complement of A_TM is not recognizable, we get the following refined version of Rice's Theorem.

Theorem 17 (Rice's Theorem, refined version). For any non-trivial property P of TM-recognizable languages: if ∅ satisfies P, then P is not recognizable; and if ∅ does not satisfy P, then the complement of P is not recognizable. In either case, P is undecidable.

Example 18. CFL_TM = {⟨M⟩ : M is a TM and L(M) is a context-free language}. Deduce that CFL_TM is not recognizable using the refined version of Rice's Theorem.

Proof. We show that CFL_TM is a non-trivial property of TM languages, and that ∅ satisfies the property. Thus, using the refined version of Rice's Theorem above, we can conclude that CFL_TM is not recognizable.
To show that it is not trivial: there exists a TM in CFL_TM, e.g., take M_∅, which rejects all inputs; ⟨M_∅⟩ ∈ CFL_TM because ∅ is a context-free language. There also exists a TM not in CFL_TM, e.g., take a TM T that accepts all strings of the form aⁿbⁿcⁿ and rejects all other strings; ⟨T⟩ ∉ CFL_TM because L(T) = {aⁿbⁿcⁿ : n ≥ 0} is not a CFL.
To show that it is a language property, note that for any two TMs M1, M2 with L(M1) = L(M2), either this language is a CFL, and then both ⟨M1⟩, ⟨M2⟩ ∈ CFL_TM, or this language is not a CFL, and then both ⟨M1⟩, ⟨M2⟩ ∉ CFL_TM. In either case, ⟨M1⟩ ∈ CFL_TM ⇔ ⟨M2⟩ ∈ CFL_TM.
Finally, ∅ satisfies the property since, as we already mentioned above, ∅ is a CFL.
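The sort-based mapping of Example 2 is simple enough to state in runnable form. Here is a short Python sketch (my own illustration, not part of the lecture notes) of the mapping f together with the composed decider from the proof of Theorem 3; the function names are invented:

```python
# A = {w in {a,b}* : number of a's = number of b's},  B = {a^n b^n : n >= 0}.
# The mapping f sorts w, so f(w) is in B if and only if w is in A (Example 2).

def f(w: str) -> str:
    """Computable mapping: sort the symbols, putting all a's before b's."""
    return "".join(sorted(w))

def decide_B(w: str) -> bool:
    """Decider for B: an even-length block of a's followed by an equal block of b's."""
    n = len(w) // 2
    return len(w) % 2 == 0 and w == "a" * n + "b" * n

def decide_A(w: str) -> bool:
    """Decider for A built as in Theorem 3: compute y = f(w), then run the decider for B."""
    return decide_B(f(w))

assert decide_A("abba") and decide_A("") and not decide_A("aab")
```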
188806
https://artofproblemsolving.com/wiki/index.php/Series?srsltid=AfmBOopFTdICpu_brAGcgykbhp7WTf4pNRgwYGn8rKsUj_ZIzm1HqLki
Series - AoPS Wiki

A series is a sum of consecutive terms in a sequence. Common series are based on common sequences.

Common Series
- Arithmetic series
- Arithmetico-geometric series
- Geometric series
- Harmonic series
- Polynomial geometric series
- Telescoping series

See also: Sequence, Convergence, Divergence.

This article is a stub. Help us out by expanding it.
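Two of the series named above have especially simple closed forms; as a brief illustration (added here, since the stub itself contains no formulas):

```latex
% Arithmetic series: first term a, common difference d, n terms
\sum_{k=0}^{n-1} (a + kd) = na + \frac{n(n-1)}{2}\,d

% Geometric series: first term a, common ratio r \neq 1, n terms,
% with the infinite sum converging for |r| < 1
\sum_{k=0}^{n-1} a r^{k} = a\,\frac{1 - r^{n}}{1 - r},
\qquad
\sum_{k=0}^{\infty} a r^{k} = \frac{a}{1 - r} \quad (|r| < 1).
```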
188807
https://artofproblemsolving.com/wiki/index.php/Principle_of_Inclusion-Exclusion?srsltid=AfmBOoowuR1hzm8hVAdDQY1LkuxX9h9BRhPRpBfUp8D0Rqc4VcS3PMzQ
Principle of Inclusion-Exclusion - AoPS Wiki

The Principle of Inclusion-Exclusion (abbreviated PIE) provides an organized method/formula to find the number of elements in the union of a given group of sets, using the size of each set and the sizes of all possible intersections among the sets.

Contents
1 Important Note(!)
2 Application
 2.1 Two Set Example
 2.2 Three Set Example
 2.3 Four Set Example (Problem, Solution)
 2.4 Five Set Example (Problem, Solution)
3 Statement
4 Proof
5 Remarks
6 Examples
7 See also

Important Note(!)

When using PIE, one should understand how to strategically overcount and undercount, in the end making sure every element is counted once and only once. In particular, memorizing a formula for PIE is a bad idea for problem solving.

Application

Here, we will illustrate how PIE is applied with various numbers of sets.

Two Set Example

Assume we are given the sizes of two sets, |A| and |B|, and the size of their intersection, |A ∩ B|. We wish to find the size of their union, |A ∪ B|. To find the union, we can add |A| and |B|. In doing so, we know we have counted everything in A ∪ B at least once. However, some things were counted twice. The elements that were counted twice are precisely those in A ∩ B. Thus, we have that:
|A ∪ B| = |A| + |B| − |A ∩ B|.

Three Set Example

Assume we are given the sizes of three sets |A|, |B|, and |C|, the sizes of their pairwise intersections |A ∩ B|, |B ∩ C|, and |C ∩ A|, and the size of their overall intersection, |A ∩ B ∩ C|. We wish to find the size of their union, |A ∪ B ∪ C|. Just like in the Two Set Example, we start with the sum of the sizes of the individual sets, |A| + |B| + |C|. We have counted the elements which are in exactly one of the original three sets once, but we've obviously counted other things twice, and even other things thrice! To account for the elements that are in two of the three sets, we first subtract out |A ∩ B| + |B ∩ C| + |C ∩ A|. Now we have correctly accounted for them, since we counted them twice originally and just subtracted them out once. However, the elements that are in all three sets were originally counted three times and then subtracted out three times. We have to add back in |A ∩ B ∩ C|. Putting this all together gives:
|A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |B ∩ C| − |C ∩ A| + |A ∩ B ∩ C|.

Four Set Example

Problem
Six people of different heights are getting in line to buy donuts. Compute the number of ways they can arrange themselves in line such that no three consecutive people are in increasing order of height, from front to back.
(2015 ARML I10)

Solution
Let A be the event that the first, second, and third people are in increasing order of height, B the event that the second, third, and fourth people are, C the event that the third, fourth, and fifth people are, and D the event that the fourth, fifth, and sixth people are. By a combination of complementary counting and PIE, our answer will be
6! − (|A| + |B| + |C| + |D|) + (|A∩B| + |A∩C| + |A∩D| + |B∩C| + |B∩D| + |C∩D|) − (|A∩B∩C| + |A∩B∩D| + |A∩C∩D| + |B∩C∩D|) + |A∩B∩C∩D|.

Now for the daunting task of evaluating all of this. For |A|, we just choose 3 of the 6 people (there is only one way to put them in increasing order), then there are 3! ways to order the other three guys, so |A| = C(6,3) · 3! = 120. The same goes for |B|, |C|, and |D|. Now, for |A∩B|, that's just putting four guys (positions 1 through 4) in order, so |A∩B| = 6!/4! = 30. By the same logic, |A∩C| forces positions 1 through 5 into order, so |A∩C| = 6!/5! = 6. |A∩D| is just choosing 3 guys out of 6 for the first block, then the remaining 3 guys for the second block, so |A∩D| = C(6,3) = 20. Now, |B∩C| is just the same as |A∩B|, so |B∩C| = 30; |B∩D| forces positions 2 through 6 into order, so |B∩D| = 6; and |C∩D| puts positions 3 through 6 in order, so |C∩D| = 30. Moving on to the triple intersections: |A∩B∩C| is the same as |A∩C|, which is 6; |A∩B∩D| is ordering everybody, so it is 1; |A∩C∩D| is again ordering everybody, which is 1; and |B∩C∩D| is the same as |A∩B∩C|, so it is 6. Finally, |A∩B∩C∩D| is ordering everybody, so it is 1. Now, let's substitute everything back in. We get the massive expression
720 − (120 + 120 + 120 + 120) + (30 + 6 + 20 + 30 + 6 + 30) − (6 + 1 + 1 + 6) + 1 = 720 − 480 + 122 − 14 + 1 = 349.

Five Set Example

Problem
There are five courses at my school. Students take the classes as follows:
243 take algebra.
323 take language arts.
143 take social studies.
241 take biology.
300 take history.
213 take algebra and language arts.
264 take algebra and social studies.
144 take algebra and biology.
121 take algebra and history.
111 take language arts and social studies.
90 take language arts and biology.
80 take language arts and history.
60 take social studies and biology.
70 take social studies and history.
60 take biology and history.
50 take algebra, language arts, and social studies.
50 take algebra, language arts, and biology.
50 take algebra, language arts, and history.
50 take algebra, social studies, and biology.
50 take algebra, social studies, and history.
50 take algebra, biology, and history.
50 take language arts, social studies, and biology.
50 take language arts, social studies, and history.
50 take language arts, biology, and history.
50 take social studies, biology, and history.
20 take algebra, language arts, social studies, and biology.
15 take algebra, language arts, social studies, and history.
15 take algebra, language arts, biology, and history.
10 take algebra, social studies, biology, and history.
10 take language arts, social studies, biology, and history.
5 take all five.
None take none.
How many people are in my school?

Solution
Let A be the set of students who take algebra, L language arts, S social studies, B biology, H history, and M the set of all students. By PIE,
|M| = (|A| + |L| + |S| + |B| + |H|) − (sum of the ten pairwise intersections) + (sum of the ten triple intersections) − (sum of the five quadruple intersections) + |A ∩ L ∩ S ∩ B ∩ H|
= (243 + 323 + 143 + 241 + 300) − (213 + 264 + 144 + 121 + 111 + 90 + 80 + 60 + 70 + 60) + (10 × 50) − (20 + 15 + 15 + 10 + 10) + 5
= 1250 − 1213 + 500 − 70 + 5 = 472.
Thus, there are 472 people in my school.

Statement

If $A_1, A_2, \ldots, A_n$ are finite sets, then:
$$\left|\bigcup_{i=1}^{n} A_i\right| = \sum_{i}|A_i| - \sum_{i<j}|A_i\cap A_j| + \sum_{i<j<k}|A_i\cap A_j\cap A_k| - \cdots + (-1)^{n-1}\left|A_1\cap A_2\cap\cdots\cap A_n\right|.$$

Proof

We prove that each element is counted exactly once. Say that some element x is in exactly k of the sets; without loss of generality, these sets are A₁, A₂, …, A_k. We proceed by induction on k. This is obvious for k = 1. If this is true for k − 1, we prove it is true for k: for every collection of these sets not containing A_k with size i, there is a corresponding collection containing A_k with size i + 1. In PIE, the sum of how many times this pair of collections counts x is (−1)^(i−1) + (−1)^i = 0. There is also one additional collection, {A_k} by itself, so x is counted exactly once.

Remarks

Sometimes it is also useful to know that, if you take into account only the first m sums on the right, then you will get an overestimate if m is odd and an underestimate if m is even. So,
|A₁ ∪ ⋯ ∪ Aₙ| ≤ Σᵢ |Aᵢ|,
|A₁ ∪ ⋯ ∪ Aₙ| ≥ Σᵢ |Aᵢ| − Σ_{i<j} |Aᵢ ∩ Aⱼ|,
|A₁ ∪ ⋯ ∪ Aₙ| ≤ Σᵢ |Aᵢ| − Σ_{i<j} |Aᵢ ∩ Aⱼ| + Σ_{i<j<k} |Aᵢ ∩ Aⱼ ∩ Aₖ|,
and so on.
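Since the four-set computation above is easy to get wrong, here is a short Python brute-force check (added for this writeup, not part of the original wiki page) that counts the ARML arrangements directly and compares the result with the PIE expression:

```python
from itertools import permutations

def brute_force():
    """Count permutations of 6 distinct heights with no 3 consecutive
    increasing positions (the 2015 ARML I10 problem)."""
    count = 0
    for p in permutations(range(6)):
        if not any(p[i] < p[i + 1] < p[i + 2] for i in range(4)):
            count += 1
    return count

def pie():
    """Evaluate the inclusion-exclusion expression from the worked solution."""
    singles = 4 * 120
    pairs = 30 + 6 + 20 + 30 + 6 + 30
    triples = 6 + 1 + 1 + 6
    quads = 1
    return 720 - singles + pairs - triples + quads

assert brute_force() == pie() == 349
```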
Examples
- 2011 AMC 8 Problems/Problem 6
- 2017 AMC 10B Problems/Problem 13
- 2005 AMC 12A Problems/Problem 18
- 2001 AIME II Problems/Problem 9
- 2002 AIME I Problems/Problem 1
- 2020 AIME II Problems/Problem 9
- 2001 AIME II Problems/Problem 2
- 2017 AIME II Problems/Problem 1

See also: Combinatorics, Overcounting.
188808
https://oeis.org/A107761/internal
A107761 - OEIS

A107761: Number of permutations of (1, 3, 5, 7, 9, ..., 2n−1) where every adjacent pair in the permutation is coprime.

%I #18 Sep 30 2015 10:47:41
%S 1,2,6,24,72,480,3600,9600,108000,1270080,4795200,74088000,768539520,
%T 4759413120,94182359040,1893397524480,11353661706240,122634632171520,
%U 3104438623534080,23063946114908160,664424069072117760
%N Number of permutations of (1,3,5,7,9,...,2n-1) where every adjacent pair in the permutation are coprime.
%C Odd analog of A076220.
%e For example, if n = 5, the permutation (5,3,7,9,1) is counted, but (5,3,9,1,7) is not counted because 3 and 9 are adjacent.
%t With[{n=9}, per=Permutations[Range[1, 2 n -1, 2]]; Select[per, Times @@ Table[GCD @@ Partition[#, 2, 1], {i, n-1}]==1&]//Length] (Seidov)
%Y Cf. A076220, A086595, A102381, A107762, A107763.
%K nonn
%O 1,2
%A Ray Chandler, following a suggestion of Leroy Quet, Jun 11 2005
%E a(1)-a(9) computed by Zak Seidov.
%E More terms from Max Alekseyev, Jun 11 2005
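For readers without Mathematica, here is an equivalent brute-force count in Python (my own sketch, not part of the OEIS entry); it reproduces the first seven terms listed above:

```python
from itertools import permutations
from math import gcd

def a(n: int) -> int:
    """Count permutations of the first n odd numbers (1, 3, ..., 2n-1)
    in which every two adjacent entries are coprime (OEIS A107761)."""
    odds = range(1, 2 * n, 2)
    return sum(
        all(gcd(p[i], p[i + 1]) == 1 for i in range(n - 1))
        for p in permutations(odds)
    )

print([a(n) for n in range(1, 8)])  # [1, 2, 6, 24, 72, 480, 3600]
```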
188809
https://www.translatorscafe.com/unit-converter/en-US/sound/3-2/neper-decibel/
Convert neper [Np] to decibel [dB] • Sound Level Converter • Acoustics — Sound

1 neper [Np] = 8.68588963804257 decibel [dB]

More about Sound Level

Some sounds, such as music, are very pleasant and make people happy. Toronto Santa Claus Parade, 2010.

Overview

Sound level is used in acoustics, the science that studies the properties of sound. It measures the intensity of sound. In everyday use, it corresponds to volume. While some sounds cause discomfort and even a range of negative psychological and physiological responses in people and animals, some sounds, such as music, the sound of the ocean, or bird songs, are very pleasant and make people happy.

dB to Ratio Example Conversion Table

| dB | Power Ratio | Amplitude Ratio |
|------|----------------|-----------------|
| 100 | 10,000,000,000 | 100,000 |
| 90 | 1,000,000,000 | 31,620 |
| 80 | 100,000,000 | 10,000 |
| 70 | 10,000,000 | 3,162 |
| 60 | 1,000,000 | 1,000 |
| 50 | 100,000 | 316.2 |
| 40 | 10,000 | 100 |
| 30 | 1,000 | 31.62 |
| 20 | 100 | 10 |
| 10 | 10 | 3.162 |
| 3 | 1.995 | 1.413 |
| 1 | 1.259 | 1.122 |
| 0 | 1 | 1 |
| −1 | 0.794 | 0.891 |
| −3 | 0.501 | 0.708 |
| −10 | 0.1 | 0.3162 |
| −20 | 0.01 | 0.1 |
| −30 | 0.001 | 0.03162 |
| −40 | 0.0001 | 0.01 |
| −50 | 0.00001 | 0.003162 |
| −60 | 0.000001 | 0.001 |
| −70 | 0.0000001 | 0.0003162 |
| −80 | 0.00000001 | 0.0001 |
| −90 | 0.000000001 | 0.00003162 |
| −100 | 0.0000000001 | 0.00001 |

Sound equipment. Rogers CityTV Studio. Toronto, Ontario, Canada.

This table shows how the logarithmic scale can describe very big and very small numbers representing power, energy, or amplitude ratios with much shorter notation. Human ears are very sensitive. You can hear everything from a whisper at a 10-meter distance to the noise of jet engines. In terms of power, the sound of a firecracker can be 100,000,000,000,000 times more powerful than the faintest sound that the typical human ear can detect (20 micropascals). That's a very big difference! Since the range of sound intensities that the human ear can detect is so huge, a logarithmic scale is used for measuring sound intensity. On the decibel scale, the faintest sound, also known as the threshold of hearing, is assigned a level of 0 decibels. A sound that is 10 times more intense is assigned a level of 10 decibels.
A sound that is 100 times more intense is assigned a level of 20 decibels. The list below shows some common sounds and their decibel levels:

- Hearing threshold: 0 dB
- Whisper at 1 m distance: 20 dB
- Quiet conversation at 1 m distance: 50 dB
- Powerful vacuum cleaner at 1 m distance: 80 dB
- Hearing loss threshold if exposure is long: 85 dB
- Portable media player at maximum volume: 100 dB
- Threshold of pain: 130 dB
- Fighter aircraft jet engine at 30 m distance: 150 dB
- Stun grenade M84 at 1.5 m distance: 170 dB

Music

The sound level of a single violin at a close distance is 82 to 92 dB. Los Angeles Children's Orchestra performing at Central Park, New York, 2013.

According to archeologists, music has been present in human life for as long as 50,000 years. It is found in every culture, and scientists believe that it is so widespread because it allows groups of people to bond. These bonds form between mothers and infants during the mother's song, between people during traditional dancing and singing events, and during modern-day concerts. Most people like patterns, and this is part of why music attracts us: it is structured and rhythmical.

Noise Pollution

Some noise is extremely unpleasant; it is often called noise pollution. This term refers to unwanted noise in the environment. It can be caused by a variety of sources and often results in discomfort, annoyance, and a range of unpleasant physiological and psychological responses, such as insomnia, fatigue, irregular blood pressure, damaged hearing when the noise pollution is severe, and other problems.

Sources

Noise pollution is often caused by transportation, in particular airplanes, trains, and cars. In industrial zones, machinery also contributes to pollution. Sometimes residents near wind turbines complain of the noise that the turbines produce. Construction can cause noise pollution as well. In some countries, many households keep dogs outdoors, often for security reasons. When these dogs start barking because of people and animals passing by, or because other dogs are barking, they contribute to noise pollution. Finally, music from bars, restaurants, and individual homes can result in considerably loud noise.

WindShare's Ex Place turbine in Toronto, Ontario generates approximately 1 million kWh per year of clean wind power

Wind Turbines

Some wind energy watch organizations report that the low-frequency noise that wind turbines generate causes sleep disturbances, headaches, and other symptoms uncomfortable enough to force some people to move away from their homes located near wind turbines. Wind power enthusiasts claim that the problem lies in people's perception of how the pollution affects them, not in real negative effects that sound pollution has on people. Currently, there are not enough reliable longitudinal investigations; therefore, it is hard to evaluate the problems with the sound pollution caused by the turbines. It is important to continue research in this area to ensure that wind energy is used in a way that does not harm people who live nearby.

Trains

Squeaky railway car disk brakes

Due to the noise pollution caused by trains, the designers that build tracks, as well as train parts manufacturers, are constantly working on improving their designs to minimize this pollution. One of the main noise sources is the sound caused by the undulations that happen when the wheels roll on the rail; this sound is called the rolling noise. The problem is studied by using simulations of the rail track and the wheel.
By making the wheel and the rail vibrate less, and with less intensity, the noise can be reduced. Refining the construction of the brakes is one solution, and in recent years considerable improvements have been made in reducing noise through new designs of some parts of the brake mechanism.

A noise barrier between the train tracks and a residential area

The way the tracks are constructed can impact this noise as well. Some of the solutions for limiting noise pollution include building barriers around the tracks, similar to those used around highways. Some unavoidable noise pollution comes from the warning systems that train operators use. For example, train crossings and the trains themselves are equipped with horns and other sound notification systems to make pedestrians and drivers aware that the train is coming. This ensures sufficient awareness even if visibility is limited or the pedestrians are visually impaired.

The Fouga Magister jet trainer flying in a residential area in Toronto, Ontario, Canada

Airplanes

Aircraft noise is caused mainly by operating jet and turboprop engines. It is a problem both for the passengers and crew inside the aircraft and for the people who live near airports. The noise in the cabin while the aircraft is in the air is about 80 decibels. To address this problem, some passengers wear active noise-canceling headphones, described below. Even in residential areas, many countries do not regulate the minimum height at which aircraft may fly or the maximum duration of time that they can stay above a given area. Generally, airspace that is open to aircraft can be used 24 hours a day. To address noise pollution by aircraft, governments often issue a list of recommendations on how to reduce noise pollution for companies and organizations in the aviation industry.

New York traffic

Automobiles

Automobiles have long been a source of noise pollution, especially in urban areas. At higher speeds, the tires rolling against the road make the most noise. When drivers use winter tires in the summer or drive all-road vehicles on the highway, they increase the noise pollution, because winter and all-road tires are designed to have higher friction so that they grip the road better even in icy or off-road conditions. The increased friction contributes to the noise. When cars move slowly, the engine is the greater source of noise. To minimize noise pollution, manufacturers are working on creating quieter cars. At the same time, cars are soundproofed, and new methods of active noise control are being designed to reduce the sounds inside the car. This can be achieved by creating an opposite-phase wave for the sound wave that causes the noise. Active noise control, discussed below in more detail, is also used in other systems, such as in noise cancellation for headphones.

Highly efficient highway glass noise barrier in Nagoya, Japan

Highways and large roads often have protective walls that keep the sound within the area. Some of these walls work so well that the cars are barely audible to an average person listening on the other side of the wall. This depends on the wall design, however. Some sound barriers are only good for blocking the sound at ground level and do not protect people who live in high-rises. Electric cars are much quieter than gasoline ones, due to their design.
It is a welcome improvement in the battle with noise pollution, but electric cars that are too quiet do not warn pedestrians that the car is approaching; therefore, some noise is generated artificially to ensure road safety.

Construction at the Clarkson GO Train Station parking lot in Mississauga, Ontario, Canada

Construction and Maintenance

Construction work and maintenance, including the engineering work at railway tracks and roads, contribute some noise pollution. Some places have to do this work outside of regular business hours, when the infrastructure systems under construction are not used as much by the public. Much of this noise is inevitable and difficult to regulate. In many countries, one requires a permit to start construction and may have to abide by rules such as not performing work at night or during holidays.

Household and Other Noise

While it is difficult to regulate household noise through legislation, outside noise-producing activities are often restricted by by-laws. For example, in some areas, all or some types of fireworks are banned outside of special national holiday celebrations. A maximum sound level limit can also be set for fireworks. Many government bodies that monitor noise pollution provide guidelines for residents on how to reduce their noise output. Suggestions include notifying the neighbors of upcoming noise-related events, performing noise-generating activities during the waking hours of the majority of people, training dogs not to make noise, and placing noise-generating appliances away from the walls that may carry the sound to the neighbors. In some countries, it is acceptable to call the police if the noise from the neighbors' activity is too disturbing. Some buildings, especially apartments, provide poor soundproofing; therefore, it is important to check the space carefully before renting or purchasing it. Here are some of the things you can do:

A noisy neighborhood in New York

- Ask a friend to pretend to be talking on the phone in the hallway, to test how well the sound travels inside.
- Examine the floor for creaking sounds. If the floor creaks, it is likely that the parts are loose and will produce a creaking sound.
- Plan your visit during the noisiest time of the day. This will depend on the neighborhood, and it may help to walk around the area before making an appointment to see the apartment, so that you have an idea of when the area becomes noisy. If there is a school nearby, it could be in the morning when the children are going to class or in the afternoon when the classes are over; if there is a busy highway, it may be during rush hour. Near a highway, it may also be very noisy in the early hours of the morning, when the trucks and cars are driving past and the rest of the area is too quiet to drown out the noise. Checking the area outside of the building at night may help with identifying potential sources of loud noise, such as bars.

A noisy neighborhood. Mississauga, Ontario, Canada.

- Choose a quiet neighborhood, away from schools and student housing, and select an apartment that faces away from the main street.
- Carefully check the layout of the building in which the apartment is located. When the surfaces shared with neighbors are minimal, the sound pollution is reduced. This is why corner top-floor apartments are some of the most popular choices. If the living quarters are removed from the communal hallway, for example when the apartment has a long corridor, the noise from the outside is also reduced.
- Ask what material the building is made from. Concrete helps with reducing the noise.

If, despite your careful checks, you discover when you move in that a noise problem exists, you can reduce the amount of noise coming into your apartment by doing the following:

- Cover the floor and the walls with sound-absorbent materials, such as rugs, fabric wall-hangings, and tapestries. You can also install heavy curtains along the windows, extending to and covering the walls. It may improve the decor of the room as well. Noise is carried by solid objects through vibration, so minimizing vibration is key to reducing the sound that enters your apartment. Covering solid objects with fabrics such as tablecloths and choosing furniture with upholstery can help minimize vibrations of furniture. Heavy objects next to a wall may help reduce its vibration; bookshelves with books next to a wall can be used, for example.

A household ceiling fan generates white noise

- White noise often helps drown out outside sounds, so using a device that produces this noise may help. It could be a white noise machine or a simple household fan. House fountains, and aquariums with water filters that work like a fountain, can be useful as well.
- Noise travels well through openings; to prevent that, it is important to insulate the unit and to make sure that the doors, especially at the main entrance, fit tightly to the floor or the carpet. You can check all the places where water and other pipes enter your apartment for potential openings. This will also help prevent insects from coming in and smells from other apartments from traveling into yours.
- If the main source of noise is the street, insulating the windows may help, but if you do, you will only be able to get fresh air through the ventilation system, not from the outside. Some modern units have very good ventilation and sealed windows, so you will not have a choice in the matter, but your sound insulation will likely be good.
- Commercial soundproofing solutions are also an option, for example soundproofing foam and acoustic panels.
- Speaking to the neighbors about the noise they make often helps as well, because many people do not realize how loud they are. Some property managers that rent out apartments require that the tenants cover their floors with carpeting, so it is possible to ask the management of the building to investigate whether the floor of the neighbors upstairs is, indeed, carpeted.

Legislation

Some countries try to regulate noise pollution through legislation; they often have fines associated with certain amounts of noise. In such cases, residents can usually file a complaint about noise with the appropriate regulatory body in their area, and this claim will be investigated. Some landlords also have rules prohibiting residents from making noise, such as playing musical instruments, during certain times. Restaurants and other establishments may have to apply for a license, which specifies how much noise is allowed and when. They may only be allowed to operate in certain areas of the city, and may also be required to soundproof their premises. Zoning, or restricting certain activities to a specified zone during city planning, is also a way of limiting noise pollution. For example, an industrial zone with factories is often placed away from houses, hospitals, and educational establishments.
Sound meter probe

Measuring Sound Levels

Sound levels are often measured to ensure that the levels are appropriate for the given purpose, for example, that microphones provide sufficient sound volume during an event. The levels are also measured to ensure that the noise in the environment is safe for the people who have to work in that environment.

Sound Level Meters

If the sound is above 85 dB for a prolonged period of time, hearing can become impaired. However, the threshold of pain for most people starts from 115 dB and can be as high as 140 dB. People do not perceive sound that is too loud as dangerous; thus, it is important to measure sound in environments that expose people to potentially loud sounds for a prolonged time. Often sound level meters are used in this case. Most of them are portable and affordable for the general public.

Personal noise dosimeter

Noise Dosimeters

Dosimeters are used when the total amount of exposure to loud sounds needs to be measured. This can be useful for people who work in noisy environments and need to know whether they have to wear protective gear. Dosimeters are useful if the noise levels fluctuate throughout the day. They are generally worn by the workers, but not everyone accepts their use in the industry. Some problems with using them include how easy it is to tamper with the device, potential interference with work, and dangers from the dosimeter's cable getting caught in the machinery. Alternatively, sound level meters can be used instead to take measurements in different areas and at different times of the day, to create a sound map. This map allows one to estimate the daily sound level exposure for a given worker, based on his or her location. Newer dosimeters are designed to eliminate these problems by being small and unobtrusive, with no display, to discourage tampering with the data.

Mitigating Noise

It is paramount to reduce people's exposure to noise pollution in factories, the airline industry, and other high-noise environments, to prevent hearing loss. Noise can disrupt the concentration of the workers or make it impossible to hear warnings and alarms, resulting in accidents. Noise is also mitigated for people's comfort. If a sound level meter is not available, it is a good idea to use protection whenever one cannot be heard without shouting. Noise can be controlled by blocking it and by canceling the incoming noise with generated outgoing noise. The former is passive while the latter is active noise cancellation. Either method is chosen depending on the situation, but sometimes both can be combined. Several different types of passive noise cancellation protection can be combined as well. For example, the ground crew at airports may wear both earplugs and earmuffs. Some places, such as factories, use sound baffles as well. They are made from an absorbent material and prevent the sound from reflecting off surfaces and amplifying.

Passive Noise Cancellation

In passive noise cancellation, materials that absorb noise are used. Most of the suggestions above for reducing noise in apartments work on this principle. In headphones, such materials are foam and sponge.

Noise-canceling headphones

Active Noise Cancellation

This type of noise cancellation helps reduce up to about 20 dB of outside noise. The principle behind active noise cancellation is to cancel all or part of the incoming sound wave with a sound wave that is of the same amplitude and in antiphase with the incoming wave. This emitted wave is created by the headphones.
Airport worker wearing noise-canceling earmuffs. Toronto Pearson International Airport YYZ, Canada.

This idea can be illustrated by imagining a swing. When one person is pushing a moving swing from the back, and another person starts pushing it from the front with equal amplitude and at the same time, their pushes come in antiphase, the movement of the swing is canceled, and it stops. The noise cancellation equipment has to predict what the incoming sound will be, based on the sound that is prevalent in the environment; therefore, this works well with monotonous sound. If the noise changes constantly, these headphones are less effective or do not work at all. The incoming sound is captured by a microphone built into the headphones. Active noise cancellation technologies are also used in hearing protection for airport ground crews.

Protective Gear Maintenance

It is paramount to inspect personal protective gear such as earmuffs frequently, because wear in the foam and cracks in the plastic may let unwanted sound through. While the companies responsible for the workers have codes and regulations about maintaining the safety equipment, errors do occur, so it is ultimately up to the worker to ensure personal safety.

This article was written by Kateryna Yuri. Unit Converter articles were edited and illustrated by Anatoly Zolotkov.

Acoustics — Sound

Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound, and infrasound.

Sound Level Converter

Sound is a mechanical wave that is an oscillation of pressure transmitted through an elastic medium such as a solid body, liquid, or gas, composed of frequencies within the range of hearing at a level sufficiently strong to be heard. As the human ear can detect sounds with a very wide range of amplitudes, sound pressure is often measured as a level on a logarithmic scale. The decibel (dB) is a unit used to measure the intensity of a sound, or the power or amplitude level of an electrical signal, by comparing it with a given level on a logarithmic scale. In a broader definition, it is a logarithmic unit that indicates the ratio of a physical quantity relative to a specified or implied reference level. A ratio in decibels is ten times the logarithm to base 10 of the ratio of two power quantities. A decibel is one-tenth of a bel, a seldom-used unit. A change in power ratio by a factor of 100 is a 20 dB change. A 3 dB change is a change in power ratio by approximately a factor of two. The decibel is used for a wide variety of measurements in science and engineering, most prominently in electronics, acoustics, and control theory. In electronics and RF engineering, the decibel is often used to quantify signal-to-noise ratios, level, amplification, and attenuation of signals.
The decibel is commonly used in acoustics to express sound levels relative to a 0 dB reference level, which has been defined as a sound pressure level of 20 micropascals. The reference level is set at the typical threshold of perception of an average human. Normally the ratio expressed in decibels in acoustics is a power ratio. Base 10 logarithms are used in the definition of the decibel. An alternative logarithmic ratio unit, the neper, is sometimes used. The neper uses the natural logarithm (base e).

Using the Sound Level Converter

This online unit converter allows quick and accurate conversion between many units of measure, from one system to another. The Unit Conversion page provides a solution for engineers, translators, and anyone whose activities require working with quantities measured in different units. You can use this online converter to convert between several hundred units (including metric, British and American) in 76 categories, or several thousand pairs, including acceleration, area, electrical, energy, force, length, light, mass, mass flow, density, specific volume, power, pressure, stress, temperature, time, torque, velocity, viscosity, volume and capacity, volume flow, and more.

Note: Integers (numbers without a decimal period or exponent notation) are considered accurate up to 15 digits, and the maximum number of digits after the decimal point is 10. In this calculator, E notation is used to represent numbers that are too small or too large. E notation is an alternative format of the scientific notation a · 10ˣ. For example: 1,103,000 = 1.103 · 10⁶ = 1.103E+6. Here E (from "exponent") represents "· 10^", that is, "times ten raised to the power of". E-notation is commonly used in calculators and by scientists, mathematicians, and engineers.

To use the converter: select the unit to convert from in the left box containing the list of units; select the unit to convert to in the right box; enter the value (for example, "15") into the left From box. The result will appear in the Result box and in the To box. Alternatively, you can enter the value into the right To box and read the result of the conversion in the From and Result boxes.
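To make the neper/decibel relation and the logarithmic ratios above concrete, here is a small Python sketch (added for illustration; it is not part of the original page):

```python
import math

NP_TO_DB = 20 / math.log(10)  # 1 Np = 8.685889638... dB, since dB uses log10 and Np uses ln

def neper_to_decibel(np_value: float) -> float:
    """Convert a level in nepers to decibels."""
    return np_value * NP_TO_DB

def decibel_to_ratios(db: float):
    """Return (power ratio, amplitude ratio) for a level in dB."""
    power_ratio = 10 ** (db / 10)      # 10 dB per factor of 10 in power
    amplitude_ratio = 10 ** (db / 20)  # 20 dB per factor of 10 in amplitude
    return power_ratio, amplitude_ratio

print(neper_to_decibel(1.0))   # 8.685889638065035
print(decibel_to_ratios(3.0))  # approx (1.995, 1.413), matching the table above
```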
188810
https://www.studocu.com/en-us/document/yale-university/university-physics/kleppner-kolenkow-mechanics-solutions/4743682
Kleppner Kolenkow Mechanics Solutions (Studocu upload by Will Sun; course: University Physics, PHYS 180, Yale University; academic year 2018/2019)

Solutions Manual to accompany AN INTRODUCTION TO MECHANICS, 2nd edition, Version 1, November 2013. Kleppner / Kolenkow. © Kleppner and Kolenkow 2013.

CONTENTS: 1 Vectors and Kinematics; 2 Newton's Laws; 3 Forces and Equations of Motion; 4 Momentum; 5 Energy; 6 Topics in Dynamics; 7 Angular Momentum and Fixed Axis Rotation; 8 Rigid Body Motion; 9 Noninertial Systems and Fictitious Forces; 10 Central Force Motion; 11 The Harmonic Oscillator; 12 The Special Theory of Relativity; 13 Relativistic Dynamics; 14 Spacetime Physics.

1.1 Vector algebra 1
A = (2î − 3ĵ + 7k̂), B = (5î + ĵ + 2k̂)
(a) A + B = (2+5)î + (−3+1)ĵ + (7+2)k̂ = 7î − 2ĵ + 9k̂
(b) A − B = (2−5)î + (−3−1)ĵ + (7−2)k̂ = −3î − 4ĵ + 5k̂
(c) A · B = (2)(5) + (−3)(1) + (7)(2) = 21
(d) A × B = det[î, ĵ, k̂; 2, −3, 7; 5, 1, 2] = −13î + 31ĵ + 17k̂

1.2 Vector algebra 2
A = (3î − 2ĵ + 5k̂), B = (6î − 7ĵ + 4k̂)
(a) A² = A · A = 3² + (−2)² + 5² = 38
(b) B² = B · B = 6² + (−7)² + 4² = 101
(c) (A · B)² = [(3)(6) + (−2)(−7) + (5)(4)]² = [18 + 14 + 20]² = 52² = 2704

1.3 Cosine and sine by vector algebra
A = (3î + ĵ + k̂), B = (−2î + ĵ + k̂)
(a) A · B = AB cos(A, B), so
cos(A, B) = A · B/(AB) = (−6 + 1 + 1)/(√(9+1+1) √(4+1+1)) = −4/(√11 √6) ≈ −0.492
(b) Method 1: |A × B| = AB sin(A, B).
A × B = det[î, ĵ, k̂; 3, 1, 1; −2, 1, 1] = (1−1)î − (3+2)ĵ + (3+2)k̂ = −5ĵ + 5k̂
|A × B| = √(5² + 5²) = 5√2
sin(A, B) = |A × B|/(AB) = 5√2/(√11 √6) ≈ 0.870
(c) Method 2 (simpler): use sin²θ + cos²θ = 1:
sin(A, B) = √(1 − cos²(A, B)) = √(1 − (0.492)²) ≈ 0.871, using (a).

1.4 Direction cosines
Note that here α, β, γ stand for direction cosines, not for the angles shown in the figure: θ_x = cos⁻¹α, θ_y = cos⁻¹β, θ_z = cos⁻¹γ.
A = A_x î + A_y ĵ + A_z k̂
A_x = A · î = A cos(A, î) ≡ Aα, with α = cos(A, î) = cos θ_x. Similarly,
A_y = A cos(A, ĵ) ≡ Aβ, with β = cos(A, ĵ) = cos θ_y,
A_z = A cos(A, k̂) ≡ Aγ, with γ = cos(A, k̂) = cos θ_z.
Using these results,
A² = A_x² + A_y² + A_z² = A²(α² + β² + γ²),
from which it follows that α² + β² + γ² = 1.
Another way to see this is A² = ρ² + A_z² = A_x² + A_y² + A_z² = A²(α² + β² + γ²), and it follows as before that α² + β² + γ² = 1.

1.5 Perpendicular vectors
Given |A − B| = |A + B| with A and B nonzero, evaluate the magnitudes by squaring:
A² − 2A·B + B² = A² + 2A·B + B²
⇒ −2A·B = +2A·B ⇒ A·B = 0,
and it follows that A ⊥ B.

Diagonals of a parallelogram
The parallelogram is equilateral, so A = B (equal magnitudes).
D₁ = A + B, D₂ = B − A
D₁ · D₂ = (A + B) · (B − A) = B² − A² = 0.
Hence D₁ · D₂ = 0 and it follows that D₁ ⊥ D₂.

Law of sines
The area of the triangle is
Area = ½|A × B| = ½ AB sin γ.
Similarly, Area = ½ BC sin α and Area = ½ AC sin β.
Hence AB sin γ = BC sin α = AC sin β, from which it follows that
(sin γ)/C = (sin α)/A = (sin β)/B.
Introducing the cross product makes the notation convenient, and emphasizes the relation between the cross product and the area of the triangle, but it is not essential for the proof.

…which can be written Ĉ = (1/√6)(2î − ĵ + k̂). Geometrically, C can be perpendicular to both A and B only if C is perpendicular to the plane determined by A and B. From the standpoint of vector algebra, this implies that C ∝ A × B. To prove this, evaluate
A × B = det[î, ĵ, k̂; A_x, A_y, A_z; B_x, B_y, B_z] ∝ C.

Perpendicular unit vectors
Given A = 3î + 4ĵ − 4k̂, find a unit vector B̂ perpendicular to A.
(a) Let B = B_x î + B_y ĵ. Then A · B = 0 gives 3B_x + 4B_y = 0, so B_y = −(3/4)B_x. To evaluate B_x, note that B is a unit vector, B² = 1:
B² = B_x² + (3/4)²B_x² = (25/16)B_x² = 1,
which gives B_x = 4/5, so
B̂ = (1/5)(4î − 3ĵ).
(b) Let C = C_x î + C_y ĵ + C_z k̂ and require C to be perpendicular to both A and B̂:
B̂ · C = 0 ⇒ (4/5)C_x − (3/5)C_y = 0 ⇒ C_y = (4/3)C_x
A · C = 0 ⇒ 3C_x + 4C_y − 4C_z = 0 ⇒ C_z = (25/12)C_x
To make C a unit vector,
C² = C_x²[1 + (4/3)² + (25/12)²] = 1,
which fixes C_x.
(c) The vector B̂ × C is perpendicular (normal) to the plane defined by B̂ and C, so we want to prove that it is parallel to A:
B̂ × C = C_x det[î, ĵ, k̂; 4/5, −3/5, 0; 1, 4/3, 25/12]
= C_x[−(75/60)î − (100/60)ĵ + (100/60)k̂]
= −(5/12)C_x (3î + 4ĵ − 4k̂) ∝ A.

Volume of a parallelepiped
With reference to the sketch, the height is A cos α, so the frontal area is AB cos α. The depth is C sin β, so the volume V is
V = (AB cos α)(C sin β) = (A cos α)(BC sin β) = A · (B × C).
The same approach can be used starting with a different face:
V = C · (A × B), V = B · (C × A).
Note that A, B, C are arbitrary vectors. This proves the vector identity
A · (B × C) = C · (A × B) = B · (C × A).

Great circle
Consider vectors R₁ and R₂ from the center of a sphere of radius R to points on the surface. To avoid complications, the sketch shows the geometry of a generic vector Rᵢ (i = 1 or 2) making angles λᵢ and φᵢ. The magnitude of Rᵢ is R, so R₁ = R₂ = R. The coordinates of a point on the surface are
Rᵢ = R cos λᵢ cos φᵢ î + R cos λᵢ sin φᵢ ĵ + R sin λᵢ k̂.
The angle between two points can be found using the dot product:
θ(1, 2) = arccos[R₁ · R₂/(R₁R₂)] = arccos(R₁ · R₂/R²).
Note that θ(1, 2) is in radians. The great circle distance between R₁ and R₂ is S = Rθ(1, 2).
R₁ · R₂ = R²(cos λ₁ cos φ₁ cos λ₂ cos φ₂ + cos λ₁ sin φ₁ cos λ₂ sin φ₂ + sin λ₁ sin λ₂)
Hence
S = Rθ(1, 2) = R arccos[cos λ₁ cos λ₂ (cos φ₁ cos φ₂ + sin φ₁ sin φ₂) + sin λ₁ sin λ₂]
= R arccos[cos λ₁ cos λ₂ cos(φ₁ − φ₂) + sin λ₁ sin λ₂]
= R arccos{½ cos(λ₁ − λ₂)[cos(φ₁ − φ₂) + 1] + ½ cos(λ₁ + λ₂)[cos(φ₁ − φ₂) − 1]}.

Measuring g
The motion is free fall with uniform acceleration, so the trajectory is a parabola, as shown in the sketch. Take the initial conditions at T = 0 to be z = z_A and v = v_A. The height z is then
z = z_A + v_A T − ½gT².
The height is again z_A when T = T_A:
z_A = z_A + v_A T_A − ½gT_A²,
so that 0 = v_A T_A − ½gT_A², i.e. v_A = ½gT_A.
By the symmetry of the trajectory, the body reaches height z_B for the second time at T = ½(T_A + T_B). With h = z_B − z_A,
h = v_A · ½(T_A + T_B) − ½g[½(T_A + T_B)]²
= ½gT_A · ½(T_A + T_B) − (g/8)(T_A + T_B)²
= (g/8)(T_A² − T_B²),
so g = 8h/(T_A² − T_B²).

Relative velocity
(a) r_A − r_B = R, so v_A − v_B = dR/dt.
(b) R = 2l sin(ωt) î, so dR/dt = 2lω cos(ωt) î. From the result of part (a),
v_a − v_b = 2lω cos(ωt) î.

Sportscar
With reference to the sketch, the distance D traveled is the area under the plot of speed vs. time. The goal is to minimize the time while keeping D constant. This involves accelerating with maximum acceleration a_a for a time t₀ and then braking with maximum (negative) acceleration a_b to bring the car to rest at time T.
v_max = a_a t₀ = a_b(T − t₀) ⇒ t₀ = a_b T/(a_a + a_b)
D = ½ v_max T = ½ a_a t₀ T = ½ [a_a a_b/(a_a + a_b)] T²
⇒ T = √[2D(a_a + a_b)/(a_a a_b)].
Numerically,
a_a = (100 km/hr)/(3.5 s) × (1000 m/km)(1 hr/3600 s) ≈ 7.9 m/s²,
a_b = 0.7g ≈ (0.7)(9.8 m/s²) ≈ 6.9 m/s²,
T = √[2(1000 m)(6.9 + 7.9)/((6.9)(7.9))] ≈ 23 s.

Particle with constant radial velocity
(a) v = ṙ r̂ + rθ̇ θ̂ = (4 m/s) r̂ + (3 m)(2 rad/s) θ̂ = (4 r̂ + 6 θ̂) m/s
(b) (Note that radians are dimensionless.)
v = √(v_r² + v_θ²) = √(16 + 36) ≈ 7.2 m/s
a = (r̈ − rθ̇²) r̂ + (rθ̈ + 2ṙθ̇) θ̂. With r̈ = 0 and θ̈ = 0,
a_r = −rθ̇² = −(3 m)(2 rad/s)² = −12 m/s², a_θ = 2ṙθ̇ = 2(4 m/s)(2 rad/s) = 16 m/s²
a = √(a_r² + a_θ²) = √(144 + 256) = 20 m/s².

Jerk
For uniform motion in a circle, θ = ωt, where the angular speed ω is constant.
r = R r̂
v = ṙ = ωR θ̂
a = −ω²R r̂
Let j denote the jerk:
j = da/dt = −ω²R (dr̂/dt) = −ω³R θ̂.
The vector diagram (drawn for R = 2 and ω = 1) rotates rigidly as the point moves around the circle.
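Several of the results above are easy to verify numerically. For instance, the great-circle distance formula derived earlier can be checked in a few lines of Python (a verification sketch added here; it is not part of the solutions manual):

```python
import math

def great_circle(R, lat1, lon1, lat2, lon2):
    """Great-circle distance S = R * arccos(cos l1 cos l2 cos(p1 - p2) + sin l1 sin l2),
    with latitudes lat and longitudes lon in radians."""
    c = (math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2)
         + math.sin(lat1) * math.sin(lat2))
    return R * math.acos(c)

# Example: from the north pole to a point on the equator the distance
# must be a quarter of the circumference, pi * R / 2.
R = 6371.0  # km, mean Earth radius (illustrative value, not from the text)
S = great_circle(R, math.pi / 2, 0.0, 0.0, 0.0)
assert abs(S - math.pi * R / 2) < 1e-9
```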
(b) The speed v(t) is the area under the curve of a(t). As the sketch indicates, v(t) increases with time up to t = T and then decreases. The maximum speed v_max therefore occurs at t = T, so that v_max = v(T). (Here the acceleration is a(t) = a_m sin²(πt/T) for 0 ≤ t ≤ T, followed by the mirror-image deceleration for T ≤ t ≤ 2T.)
v_max = v(0) + ∫₀ᵀ a(t′) dt′ = ∫₀ᵀ a_m sin²(πt′/T) dt′ = (a_m/2) ∫₀ᵀ [1 − cos(2πt′/T)] dt′ = ½ a_m T.
(c) For t ≪ T, we can use the small-angle approximation sin θ ≈ θ − θ³/3! + ⋯:
v(t) = ∫₀ᵗ a_m sin²(πt′/T) dt′ ≈ a_m ∫₀ᵗ (πt′/T)² dt′ = (a_m π²/3) t³/T².
(d) Direct method: let the distance at time t be x(t) = ∫₀ᵗ v(t′) dt′, where
v(t) = ∫₀ᵗ a(t′) dt′ = (a_m/2)[t − (T/2π) sin(2πt/T)] for 0 ≤ t ≤ T,
v(t) = v(T) + ∫ᵀᵗ a(t′) dt′ for T ≤ t ≤ 2T.
(Note that v(2T) = 0.) Carrying out the integral,
D = x(2T) = ∫₀²ᵀ v(t′) dt′ = ½ a_m T².
(e) Symmetry method: by symmetry, the distance from x(0) to x(T) and the distance from x(T) to x(2T) are equal. The distance from x(0) to x(T) is
x(T) = ∫₀ᵀ (a_m/2)[t′ − (T/2π) sin(2πt′/T)] dt′ = (a_m/2)[T²/2 + (T/2π)²(cos 2π − cos 0)] = ¼ a_m T².
By symmetry, D = 2x(T) = ½ a_m T², as before.

Rolling tire
Let x, y be the coordinates of the pebble measured from the stationary origin. Let ρ be the vector from the stationary origin to the center of the rolling tire, and let s be the vector from the center of the tire to the pebble:
ρ = Rθ î + R ĵ, s = −R sin θ î − R cos θ ĵ.
From the diagram, the vector from the origin to the pebble is
x î + y ĵ = ρ + s = (Rθ − R sin θ) î + (R − R cos θ) ĵ,
so that
x = Rθ − R sin θ, ẋ = Rθ̇ − Rθ̇ cos θ,
y = R − R cos θ, ẏ = Rθ̇ sin θ.
The tire is rolling at constant speed without slipping: θ = ωt.
Save to read later on your computer Save to a Studylist 1.1 V ector algebra 1 A=(2 ˆ i−3 ˆ j+7 ˆ k)B=(5 ˆ i+ˆ j+2 ˆ k) (a)A+B=(2+5)ˆ i+(−3+1)ˆ j+(7+2)ˆ k=7 ˆ i−2 ˆ j+9 ˆ k (b)A−B=(2−5)ˆ i+(−3−1)ˆ j(7−2)ˆ k=−3 ˆ i−4 ˆ j+5 ˆ k (c)A·B=(2)(5)+(−3)(1)+(7)(2)=21 (d)A×B= ˆ i ˆ j ˆ k 2−3 7 5 1 2  =−13 ˆ i+31 ˆ j+17 ˆ k 1.2 V ector algebra 2 A=(3 ˆ i−2 ˆ j+5 ˆ k)B=(6 ˆ i−7 ˆ j+4 ˆ k) (a)A 2=A·A=3 2+(−2)2+5 2=38 (b)B 2=B·B=6 2+(−7)2+4 2=101 (c)(A·B)2=[(3)(6)+(−2)(−7)+(5)(4)]2=[18+14+20]2=52 2=2704 2 VECT ORS AND KINEMA TICS 1.3 Cosine and sine by vector alge bra A=(3 ˆ i+ˆ j+ˆ k)B=(−2 ˆ i+ˆ j+ˆ k) (a) A·B=A B cos(A,B) cos(A,B)=A·B A B =(−6+1+1) √(9+1+1)√4+1+1)=−4 √11√6≈0.492 (b)method 1: |A×B|=A B sin(A,B) sin(A,B)=|A×B| A B A×B= ˆ i ˆ j ˆ k 3 1 1 −2 1 1  =(1−1)ˆ i−(3+2)ˆ j+(3+2)ˆ k=−5 ˆ j+5 ˆ k |A×B|=√5 2+5 2=5√2 sin(A,B)=|A×B| A B=5√2 √11√6≈0.870 (c)method 2(simpler)–use: sin 2 θ+cos 2 θ=1 sin(A,B)=p 1−cos 2(A,B) =p 1−(0.492)2 from(a)≈0.871 1.4 Direction cosines Note that here α,β,γ stand for direction cosines,not for the angles sho wn in the figure: θ x=cos−1 α, θ y=cos−1 β, θ z=cos−1 γ. continued next pa ge=⇒ Document continues below Discover more from: University PhysicsPHYS 180Yale University 11 documents Go to course 2 Midterm 2 Practice Problem Set Physics University Physics Practice materials 100% (2) 2 Credit Issue - Vocabulary Answer Key University Physics Coursework 100% (1) 73 Thoracic trauma 2 University Physics Tutorial work 100% (1) 52 Capítulo 3: Fuerza y movimiento en física universitaria University Physics Lecture notes None 3 Ionic Bonding Profile: Seeking Chemistry & Stability in Elements!University Physics Assignments None 1 Rutgers University-New Brunswick 2022 Admissions Essay Guidelines University Physics Coursework None ### Discover more from: University PhysicsPHYS 180Yale University11 documents Go to course 2 Midterm 2 Practice Problem Set Physics University Physics 100% (2) 2 Credit Issue - Vocabulary Answer Key University Physics 100% (1) 73 Thoracic trauma 2 University Physics 100% (1) 52 Capítulo 3: Fuerza y movimiento en física universitaria University Physics None 3 Ionic Bonding Profile: Seeking Chemistry & Stability in Elements! University Physics None 1 Rutgers University-New Brunswick 2022 Admissions Essay Guidelines University Physics None VECT ORS AND KINEMA TICS 3 A=A x ˆ i+A y ˆ j+A z ˆ k A x=A·ˆ i=A cos(A,ˆ i)≡A α α=cos(A,ˆ i)=cos θ x. Similarly, A y=A cos(A,ˆ j)≡A β β=cos(A,ˆ j)=cos θ y A z=A cos(A,ˆ k)≡A γ γ=cos(A,ˆ k)=cos θ z Using these results, A 2=A 2 x+A 2 y+A 2 z =A 2(α 2+β 2+γ 2) from which it follo ws that α 2+β 2+γ 2=1 Another way to see this is A 2=ρ 2+A 2 z=A 2 x+A 2 y+A 2 z=A 2(α 2+β 2+γ 2) and it follo ws as before that α 2+β 2+γ 2=1. 1.5 P erpendicular vectors Gi ven|A−B|=|A+B|with A and B nonzero.Ev aluate the magnitudes by squaring. A 2−2 A·B+B 2=A 2+2 A·B+B 2 −2 A·B=+2 A·B. A·B=0 and it follo ws that A⊥B. 
188811
https://bingweb.binghamton.edu/~suzuki/ThermoStatFIles/11.12%20FD%20Sommerfeld%20formula.pdf
Sommerfeld formula
Masatsugu Sei Suzuki, Department of Physics, SUNY at Binghamton (November 01, 2018)

Arnold Johannes Wilhelm Sommerfeld (5 December 1868 – 26 April 1951) was a German theoretical physicist who pioneered developments in atomic and quantum physics, and also educated and mentored a large number of students for the new era of theoretical physics. He served as doctoral supervisor for many Nobel Prize winners in physics and chemistry (only J. J. Thomson's record of mentorship is comparable to his). He introduced the second quantum number (azimuthal quantum number) and the fourth quantum number (spin quantum number). He also introduced the fine-structure constant and pioneered X-ray wave theory.

The Sommerfeld formula is an approximation method developed by Arnold Sommerfeld for integrals representing statistical averages over the Fermi-Dirac distribution:

∫_0^∞ f(ε) g(ε) dε ≃ ∫_0^μ g(ε) dε + (π²/6β²) g⁽¹⁾(μ) + (7π⁴/360β⁴) g⁽³⁾(μ) + (31π⁶/15120β⁶) g⁽⁵⁾(μ) + (127π⁸/604800β⁸) g⁽⁷⁾(μ) + (73π¹⁰/3421440β¹⁰) g⁽⁹⁾(μ) + (1414477π¹²/653837184000β¹²) g⁽¹¹⁾(μ) + ⋯

where

f(ε) = 1/[e^{β(ε−μ)} + 1]

is the Fermi-Dirac distribution function and μ is the chemical potential. When βμ ≫ 1 (the condition of strong degeneracy), the derivative −∂f/∂ε becomes a Dirac delta function, taking a very sharp peak around ε = μ. The chemical potential μ is dependent on temperature.

1. Derivation of the Sommerfeld formula

We assume G(ε) = ∫_0^ε g(x) dx, where g(ε) = G′(ε) is a slowly varying function around ε = μ. We calculate the integral by parts:

I = ∫_0^∞ f(ε) g(ε) dε = ∫_0^∞ f(ε) G′(ε) dε = [f(ε) G(ε)]_0^∞ − ∫_0^∞ (∂f/∂ε) G(ε) dε = ∫_0^∞ (−∂f/∂ε) G(ε) dε,

since G(0) = 0 and f(∞) = 0. Note that −∂f/∂ε ≈ δ(ε − μ), the Dirac delta function having a very sharp peak at ε = μ. We expand G(ε) by a Taylor expansion around ε = μ:

G(ε) = G(μ) + (1/1!) G⁽¹⁾(μ)(ε − μ) + (1/2!) G⁽²⁾(μ)(ε − μ)² + (1/3!) G⁽³⁾(μ)(ε − μ)³ + ⋯

Thus we get

I = G(μ) ∫ (−∂f/∂ε) dε + (1/1!) G⁽¹⁾(μ) ∫ (−∂f/∂ε)(ε − μ) dε + (1/2!) G⁽²⁾(μ) ∫ (−∂f/∂ε)(ε − μ)² dε + ⋯

We put x = β(ε − μ), with βμ ≫ 1 so the lower limit of the x-integrals may be extended to −∞. Then

−∂f/∂ε = β eˣ/(eˣ + 1)² = β/[(eˣ + 1)(e⁻ˣ + 1)],

which is an even function of x, so the odd moments vanish. With G⁽¹⁾(μ) = g(μ), G⁽²⁾(μ) = g⁽¹⁾(μ), G⁽³⁾(μ) = g⁽²⁾(μ), …, we have

I = G(μ) + (1/2!β²) g⁽¹⁾(μ) ∫_{−∞}^{∞} x² dx/[(eˣ + 1)(e⁻ˣ + 1)] + (1/4!β⁴) g⁽³⁾(μ) ∫_{−∞}^{∞} x⁴ dx/[(eˣ + 1)(e⁻ˣ + 1)] + ⋯
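These even moments have closed forms, quoted just below. A quick quadrature check (a sketch, not part of the original notes; the finite cutoff ±50 is a pragmatic choice, since the integrand decays like e^{−|x|}):

```python
import numpy as np
from scipy.integrate import quad

# Check M_n = ∫ x^n / ((e^x + 1)(e^{-x} + 1)) dx over (-inf, inf)
# against the closed forms used in the Sommerfeld expansion.
def moment(n):
    integrand = lambda x: x**n / ((np.exp(x) + 1) * (np.exp(-x) + 1))
    val, _ = quad(integrand, -50, 50)
    return val

expected = {2: np.pi**2 / 3, 4: 7 * np.pi**4 / 15,
            6: 31 * np.pi**6 / 21, 8: 127 * np.pi**8 / 15}
for n, ref in expected.items():
    print(n, moment(n), ref)   # each pair agrees to quadrature accuracy
```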
Evaluating these moments, we recover the Sommerfeld formula stated at the beginning:

∫_0^∞ f(ε) g(ε) dε ≃ ∫_0^μ g(ε) dε + (π²/6β²) g⁽¹⁾(μ) + (7π⁴/360β⁴) g⁽³⁾(μ) + (31π⁶/15120β⁶) g⁽⁵⁾(μ) + (127π⁸/604800β⁸) g⁽⁷⁾(μ) + ⋯

where

∫_{−∞}^{∞} x² dx/[(eˣ+1)(e⁻ˣ+1)] = π²/3,   ∫_{−∞}^{∞} x⁴ dx/[(eˣ+1)(e⁻ˣ+1)] = 7π⁴/15,
∫_{−∞}^{∞} x⁶ dx/[(eˣ+1)(e⁻ˣ+1)] = 31π⁶/21,   ∫_{−∞}^{∞} x⁸ dx/[(eˣ+1)(e⁻ˣ+1)] = 127π⁸/15.

2. T dependence of the chemical potential

We start with

N = ∫_0^∞ f(ε) D(ε) dε

where

D(ε) = (V/2π²)(2m/ħ²)^{3/2} ε^{1/2} ≡ a ε^{1/2},   a = (V/2π²)(2m/ħ²)^{3/2}.

We get

N = ∫_0^μ D(ε) dε + (π²/6)(k_B T)² D′(μ) = (2a/3) μ^{3/2} + (π²/12)(k_B T)² a μ^{−1/2}.

But we also have μ(T = 0) = ε_F, so

N = ∫_0^{ε_F} D(ε) dε = (2a/3) ε_F^{3/2}.

Thus the chemical potential is given by

ε_F^{3/2} = μ^{3/2} [1 + (π²/8)(k_B T/μ)²],

or

μ = ε_F [1 + (π²/8)(k_B T/ε_F)²]^{−2/3} ≈ ε_F [1 − (π²/12)(k_B T/ε_F)²]   (3D case),

which is valid to order (k_B T/ε_F)² in the above expansion. This result confirms that the chemical potential (Fermi level) remains close to the Fermi energy as long as k_B T ≪ ε_F. As the temperature increases, the chemical potential falls below the Fermi energy by a margin that grows quadratically with the temperature. For the 1D case, where D(ε) ∝ ε^{−1/2} so the sign of the correction is reversed, we similarly have

μ ≈ ε_F [1 + (π²/12)(k_B T/ε_F)²]   (1D case).

3. Total energy and specific heat

Using Sommerfeld's formula, the total energy U of the electrons is approximated by

U = ∫_0^∞ ε f(ε) D(ε) dε ≃ ∫_0^μ ε D(ε) dε + (π²/6)(k_B T)² [D(μ) + μ D′(μ)]
  ≃ ∫_0^{ε_F} ε D(ε) dε + (μ − ε_F) ε_F D(ε_F) + (π²/6)(k_B T)² [D(ε_F) + ε_F D′(ε_F)].

The total number of electrons is also approximated by

N ≃ ∫_0^{ε_F} D(ε) dε + (μ − ε_F) D(ε_F) + (π²/6)(k_B T)² D′(ε_F).

Since ∂N/∂T = 0 (N is independent of T),

μ′ D(ε_F) + (π²/3) k_B² T D′(ε_F) = 0,   i.e.   μ′ = −(π²/3) k_B² T D′(ε_F)/D(ε_F).

The specific heat C_el is defined by

C_el = dU/dT = (π²/3) k_B² T D(ε_F) + ε_F [μ′ D(ε_F) + (π²/3) k_B² T D′(ε_F)].

The second term is equal to zero, so we have the final form of the specific heat

C_el = (π²/3) k_B² D(ε_F) T.   (1)

In this expression we assume that there are N electrons inside the volume V (= L³). The specific heat per mole is given by

C_el = (π²/3) N_A k_B² D_A(ε_F) T,

where N_A is the Avogadro number and D_A(ε_F) [1/(eV at)] is the density of states per unit energy per atom. Note that (π²/3) N_A k_B² = 2.35715 mJ·eV/(mol K²).

The entropy S is obtained as follows: ∂S/∂T = C_el/T = (π²/3) k_B² D(ε_F), so

S = (π²/3) k_B² D(ε_F) T,

which is the same as the heat capacity.
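The 3D result of Section 2 can be checked by solving the particle-number equation numerically. The sketch below works in units where ε_F = 1; the integration cutoff and root bracket are pragmatic choices, not prescribed by the text, and the deviation between the two columns grows as k_B T/ε_F increases (higher-order terms):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Solve N(mu, T) = N(eF, 0) for the 3D gas, D(e) ∝ sqrt(e), in units eF = 1,
# and compare with mu ≈ 1 - (pi^2/12) * (kB*T)^2.
def n_integral(mu, kT):
    f = lambda e: np.sqrt(e) / (np.exp((e - mu) / kT) + 1.0)
    val, _ = quad(f, 0.0, mu + 40 * kT)   # tail beyond cutoff is negligible
    return val

n0 = 2.0 / 3.0                            # (2/3) * eF^{3/2} at T = 0
for kT in [0.05, 0.1, 0.2]:
    mu_exact = brentq(lambda m: n_integral(m, kT) - n0, 0.3, 1.2)
    mu_somm = 1.0 - (np.pi**2 / 12.0) * kT**2
    print(kT, round(mu_exact, 5), round(mu_somm, 5))
```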
((Note)) The heat capacity of free electrons. For the free-electron gas, N = (2a/3) ε_F^{3/2} and D(ε_F) = a ε_F^{1/2}, so

D(ε_F) = 3N/(2ε_F),   C_el = (π²/3) k_B² D(ε_F) T = (π²/2) N k_B (k_B T/ε_F).

The electronic heat capacity per mole is

(π²/3) N_A k_B² D_A(ε_F) T = (π²/2) R k_B T/ε_F = (π²/2) R (T/T_F).

Then γ is related to D_A(ε_F) as

γ = (π²/3) N_A k_B² D_A(ε_F),   or   γ (mJ/mol K²) = 2.35715 D_A(ε_F).   (2)

We now give the physical interpretation for Eq. (1). When we heat the system from 0 K, not every electron gains an energy k_B T; only those electrons in orbitals within an energy range k_B T of the Fermi level are excited thermally, and these electrons gain an energy of k_B T. Only a fraction of the order of k_B T D(ε_F)/N can be excited thermally. The total electronic thermal kinetic energy E is of the order of (k_B T)² D(ε_F). The specific heat C_el is of the order of k_B² T D(ε_F).

((Note))
For Pb, γ = 2.98, D_A(ε_F) = 1.26/(eV at)
For Al, γ = 1.35, D_A(ε_F) = 0.57/(eV at)
For Cu, γ = 0.695, D_A(ε_F) = 0.29/(eV at)

Table 2. γ (mJ/mol K²) (H.P. Myers)
Na: 1.38    Ti: 3.35
K: 2.08     V: 9.26
Mg: 1.3     Cr: 1.4
Al: 1.35    Mn: 9.2
Pb: 2.98    Fe: 4.98
Cu: 0.70    Co: 4.73
Ag: 0.65    Ni: 7.02
Au: 0.73    Pt: 7.0

4. Grand potential Ω_G for free electrons with the use of the Sommerfeld expansion

Using the formula for fermions,

Ω_G = −(2/3) [gVm^{3/2}/(√2 π² ħ³)] ∫_0^∞ ε^{3/2} dε/[e^{β(ε−μ)} + 1]

and, at T = 0 K,

N = [gVm^{3/2}/(√2 π² ħ³)] ∫_0^{ε_F} ε^{1/2} dε

with g = 2 for spin 1/2, we get the ratio

Ω_G/N = −(2/3) (2/5) μ^{5/2} [1 + (5π²/8)(k_B T/μ)²] / [(2/3) ε_F^{3/2}]
     ≈ −(2/5) ε_F [1 − (5π²/24)(k_B T/ε_F)²][1 + (5π²/8)(k_B T/ε_F)²],

or

Ω_G = −(2/5) N ε_F [1 + (5π²/12)(k_B T/ε_F)²],

where we used the Sommerfeld expansion

∫_0^∞ g(ε) dε/[e^{β(ε−μ)} + 1] ≃ ∫_0^μ g(ε) dε + (π²/6)(k_B T)² g⁽¹⁾(μ)

together with μ ≈ ε_F [1 − (π²/12)(k_B T/ε_F)²]. We note that

Ω_G = −PV = −k_B T ln Z_G = −(2/3) U.

The internal energy is therefore

U = −(3/2) Ω_G = (3/5) N ε_F [1 + (5π²/12)(k_B T/ε_F)²].

The entropy S is obtained from

S = −(∂Ω_G/∂T)_{V,μ} = (π²/2) N k_B (k_B T/ε_F) = (π²/3) k_B² T D(ε_F),

consistent with the results of the previous sections.
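As a numerical footnote to the note above: the conversion factor 2.35715 and the γ values for Pb, Al and Cu can be reproduced in a few lines (a sketch using standard physical constants, not part of the original notes):

```python
import numpy as np

# gamma = (pi^2/3) * N_A * kB^2 * D_A(eF), with kB in eV/K so that
# D_A is entered in states/(eV atom) and gamma comes out in mJ/(mol K^2).
kB_eV = 8.617333e-5        # Boltzmann constant, eV/K
NA = 6.0221408e23          # Avogadro number, 1/mol
eV = 1.602177e-19          # J per eV

factor = (np.pi**2 / 3) * NA * kB_eV**2 * eV * 1e3   # mJ*eV/(mol K^2)
print(round(factor, 5))    # ≈ 2.3571, matching the quoted 2.35715

# D_A values quoted in the text, in 1/(eV atom):
for metal, DA in [("Pb", 1.26), ("Al", 0.57), ("Cu", 0.29)]:
    print(metal, round(factor * DA, 2), "mJ/(mol K^2)")
    # close to the quoted gamma = 2.98, 1.35, 0.695
```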
5. Summary

(i) Internal energy U:

U = ∫_0^∞ ε D(ε) f(ε) dε = ∫_0^∞ u(ε) f(ε) dε ≃ ∫_0^μ u(ε) dε + (π²/6)(k_B T)² u′(μ)
  ≃ ∫_0^{ε_F} u(ε) dε + (μ − ε_F) u(ε_F) + (π²/6)(k_B T)² [D(ε_F) + ε_F D′(ε_F)]

where u(ε) = ε D(ε) and u′(ε) = D(ε) + ε D′(ε).

(ii) Number N:

N = ∫_0^∞ D(ε) f(ε) dε ≃ ∫_0^μ D(ε) dε + (π²/6)(k_B T)² D′(μ).

Since N is independent of T,

μ′ D(μ) + (π²/3) k_B² T D′(μ) = 0,   or   μ′ = −(π²/3) k_B² T D′(μ)/D(μ).

(iii) Heat capacity C:

C = dU/dT = μ′ u(μ) + (π²/3) k_B² T u′(μ)
  = μ [μ′ D(μ) + (π²/3) k_B² T D′(μ)] + (π²/3) k_B² T D(μ)
  = (π²/3) k_B² T D(ε_F),

or C = γT with γ = (π²/3) k_B² D(ε_F).

(iv) Grand potential Ω_G:

Ω_G = −k_B T ln Z_G = −k_B T ∫_0^∞ D(ε) ln(1 + z e^{−βε}) dε = −∫_0^∞ ψ(ε) f(ε) dε

where ψ′(ε) = D(ε) (integration by parts). Thus

−Ω_G = k_B T ln Z_G ≃ ∫_0^μ ψ(ε) dε + (π²/6)(k_B T)² D(μ).

When D(ε) = a ε^{1/2} we get

ψ(ε) = (2/3) a ε^{3/2} = (2/3) u(ε),   u(ε) = ε D(ε) = a ε^{3/2}.

The grand potential is then

Ω_G = −(2/3) [∫_0^μ u(ε) dε + (π²/6)(k_B T)² u′(μ)] = −(2/3) U,

or PV = (2/3) U.

6. Derivation of entropy S (by R. Kubo)

Here we show the method used by Kubo. This method is very instructive for students. The chemical potential can be derived as follows. We start with

N = ∫_0^{ε_F} D(ε) dε = ∫_0^{ε_F} ψ′(ε) dε   (1)

at T = 0 K, where ψ′(ε) = D(ε). At finite T,

N = ∫_0^∞ f(ε) D(ε) dε ≃ ∫_0^μ D(ε) dε + (π²/6)(k_B T)² D′(μ).   (2)

Subtracting Eq. (1) from Eq. (2), we get

∫_{ε_F}^{μ} D(ε) dε + (π²/6)(k_B T)² D′(ε_F) = 0.

Noting that ∫_{ε_F}^{μ} D(ε) dε ≈ D(ε_F)(μ − ε_F), we have

D(ε_F)(μ − ε_F) + (π²/6)(k_B T)² D′(ε_F) = 0.

Then the chemical potential is obtained as

μ = ε_F − (π²/6)(k_B T)² D′(ε_F)/D(ε_F) = ε_F − (π²/6)(k_B T)² [d ln D(ε)/dε]_{ε_F} = ε_F − (π²/6)(k_B T)² ψ″(ε_F)/ψ′(ε_F).

The internal energy:

U = ∫_0^∞ ε D(ε) f(ε) dε ≃ ∫_0^μ ε D(ε) dε + (π²/6)(k_B T)² [D(μ) + μ D′(μ)]
  ≃ ∫_0^{ε_F} ε D(ε) dε + (μ − ε_F) ε_F D(ε_F) + (π²/6)(k_B T)² [D(ε_F) + ε_F D′(ε_F)],

where we used the Taylor expansion ∫_0^μ ε D(ε) dε = ∫_0^{ε_F} ε D(ε) dε + (μ − ε_F) ε_F D(ε_F). Substituting μ − ε_F = −(π²/6)(k_B T)² D′(ε_F)/D(ε_F), the terms in D′(ε_F) cancel and the internal energy can be rewritten as

U = ∫_0^{ε_F} ε D(ε) dε + (π²/6)(k_B T)² D(ε_F).

The heat capacity:

C = dU/dT = (π²/3) k_B² T D(ε_F) = γT.

Note that D(ε_F) k_B T is the number of particles which are in the softened edge in the vicinity of the Fermi energy.
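Section 6's compact result U(T) = U(0) + (π²/6)(k_B T)² D(ε_F) is easy to test for the 3D gas. A sketch (units ε_F = 1, so D(ε) = √ε and D(ε_F) = 1; cutoffs and brackets are pragmatic choices, not part of the original notes):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

D = lambda e: np.sqrt(e)   # 3D density of states, units eF = 1

def solve_mu(kT, n0=2.0/3.0):
    # fix mu by particle-number conservation at temperature kT
    n = lambda m: quad(lambda e: D(e) / (np.exp((e - m) / kT) + 1), 0, m + 40*kT)[0]
    return brentq(lambda m: n(m) - n0, 0.3, 1.2)

def U(kT):
    mu = solve_mu(kT)
    return quad(lambda e: e * D(e) / (np.exp((e - mu) / kT) + 1), 0, mu + 40*kT)[0]

U0 = 2.0 / 5.0             # U(0) = ∫_0^1 e*sqrt(e) de
for kT in [0.02, 0.05, 0.1]:
    # numerical U(T) - U(0) vs the Sommerfeld prediction (pi^2/6)*kT^2
    print(kT, round(U(kT) - U0, 6), round((np.pi**2 / 6) * kT**2, 6))
```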
The grand potential:

Ω_G = −k_B T ln Z_G = −k_B T ∫_0^∞ D(ε) ln(1 + z e^{−βε}) dε = −∫_0^∞ ψ(ε) f(ε) dε
   ≃ −[∫_0^μ ψ(ε) dε + (π²/6)(k_B T)² ψ′(μ)].

Using the Taylor expansion ∫_0^μ ψ(ε) dε = ∫_0^{ε_F} ψ(ε) dε + (μ − ε_F) ψ(ε_F) and μ − ε_F = −(π²/6)(k_B T)² ψ″(ε_F)/ψ′(ε_F), we have

−Ω_G = PV = ∫_0^{ε_F} ψ(ε) dε + (π²/6)(k_B T)² ψ′(ε_F) {1 − ψ(ε_F) ψ″(ε_F)/[ψ′(ε_F)]²}.

Using these expressions, the entropy can be obtained as follows:

TS = U + PV − μN.

Since ∫_0^{ε_F} [ε D(ε) + ψ(ε)] dε = ε_F ψ(ε_F) and N = ψ(ε_F), the zeroth-order terms cancel and the (k_B T)² terms combine to give

TS = (π²/3)(k_B T)² ψ′(ε_F),   or   S = (π²/3) k_B² T D(ε_F).

7. Grand potential

The Gibbs free energy: G = μN = F + PV = U − TS + PV. The grand potential is given by

Ω_G = −PV = U − TS − μN.

Since

dΩ_G = d(U − TS − μN) = TdS − PdV + μdN − TdS − SdT − μdN − Ndμ = −SdT − PdV − Ndμ,

we have

N = −(∂Ω_G/∂μ)_{T,V},   S = −(∂Ω_G/∂T)_{V,μ}.

We now calculate the grand potential using the Sommerfeld formula:

Ω_G = −PV = −k_B T ln Z_G = −k_B T ∫_0^∞ D(ε) ln(1 + z e^{−βε}) dε = −∫_0^∞ ψ(ε) f(ε) dε

with ψ′(ε) = D(ε) and

f(ε) = 1/[e^{β(ε−μ)} + 1] = 1/[z⁻¹ e^{βε} + 1].

8. The derivation of S from the grand potential

The entropy S is given by

S = −(∂Ω_G/∂T)_{V,μ} = ∫_0^∞ ψ(ε) (∂f/∂T)_μ dε.

Noting that

(∂f/∂T)_μ = −[(ε − μ)/T] (∂f/∂ε),

TS can be rewritten as

TS = −∫_0^∞ (ε − μ) ψ(ε) (∂f/∂ε) dε = ∫_0^∞ h(ε) f(ε) dε

where

h(ε) = d/dε [(ε − μ) ψ(ε)] = ψ(ε) + (ε − μ) ψ′(ε)   and   h′(ε) = 2ψ′(ε) + (ε − μ) ψ″(ε).

Using the Sommerfeld formula, and h′(μ) = 2ψ′(μ), we get

TS ≃ ∫_0^μ h(ε) dε + (π²/6)(k_B T)² h′(μ) = ∫_0^μ h(ε) dε + (π²/3)(k_B T)² ψ′(μ).

Here we note that

∫_0^μ h(ε) dε = [(ε − μ) ψ(ε)]_0^μ = 0.

Thus TS can be approximated as

TS ≃ (π²/3)(k_B T)² ψ′(ε_F) = (π²/3)(k_B T)² D(ε_F),

and the entropy S is

S = (π²/3) k_B² T D(ε_F).

9. The derivation of the number N from the grand potential

The number N is expressed by

N = −(∂Ω_G/∂μ)_{T,V} = ∫_0^∞ ψ(ε) (∂f/∂μ)_T dε.

Note that

(∂f/∂μ)_T = −(∂f/∂ε).

Integrating by parts, this leads to the result which is familiar to us:

N = −∫_0^∞ ψ(ε) (∂f/∂ε) dε = ∫_0^∞ ψ′(ε) f(ε) dε = ∫_0^∞ D(ε) f(ε) dε.
10. Derivation of the entropy using the Maxwell relation

From thermodynamics, dG can be expressed as

dG = d(U − TS + PV) = TdS − PdV + μdN − TdS − SdT + PdV + VdP = −SdT + VdP + μdN,

leading to the relations

S = −(∂G/∂T)_{P,N},   V = (∂G/∂P)_{T,N},   μ = (∂G/∂N)_{T,P}.

From these relations, the Maxwell relation can be expressed by

−(∂S/∂N)_{P,T} = (∂μ/∂T)_{P,N},

since both sides equal the mixed second derivative ∂²G/∂N∂T.

Suppose that N depends only on T and μ. Manipulating Jacobians,

(∂μ/∂T)_N = −(∂N/∂T)_μ / (∂N/∂μ)_T,   (∂S/∂N)_T = (∂S/∂μ)_T / (∂N/∂μ)_T.

Thus we have the relation

(∂S/∂μ)_T = (∂N/∂T)_μ.

Here we note that

N ≃ ∫_0^μ D(ε) dε + (π²/6)(k_B T)² D′(μ).

Then we get

(∂N/∂μ)_T = D(μ) + (π²/6)(k_B T)² D″(μ) ≈ D(ε_F),
(∂N/∂T)_μ = (π²/3) k_B² T D′(μ) ≈ (π²/3) k_B² T D′(ε_F).

(a) Chemical potential. Using the above relation,

(∂μ/∂T)_N = −(∂N/∂T)_μ / (∂N/∂μ)_T = −(π²/3) k_B² T D′(ε_F)/D(ε_F),

or, integrating,

μ = ε_F − (π²/6)(k_B T)² D′(ε_F)/D(ε_F).

(b) Entropy S. Using the above relation, we can calculate the entropy S as

(∂S/∂μ)_T = (∂N/∂T)_μ = (π²/3) k_B² T D′(μ).

The integration of this with respect to μ leads to

S = (π²/3) k_B² T D(μ) ≈ (π²/3) k_B² T D(ε_F).

11. Sommerfeld formula: 1D case

We consider the case of a one-dimensional system:

N = ∫_0^∞ f(ε) D₁(ε) dε = ∫_0^{ε_F} D₁(ε) dε,

where the density of states for the 1D system is

D₁(ε) = (L/π)(2m/ħ²)^{1/2} ε^{−1/2} ≡ a₁ ε^{−1/2},

since D₁(ε) dε = 2 · (L/2π) · 2 dk; the factor 2 of 2dk comes from the 1D energy dispersion relation being an even function of k. We use the Sommerfeld formula

N ≃ ∫_0^μ D₁(ε) dε + (π²/6)(k_B T)² D₁′(μ) = 2a₁ μ^{1/2} − (π²/12)(k_B T)² a₁ μ^{−3/2}.   (1)

But we also have

N = ∫_0^{ε_F} D₁(ε) dε = 2a₁ ε_F^{1/2}.   (2)

Combining Eqs. (1) and (2), we get

ε_F^{1/2} = μ^{1/2} [1 − (π²/24)(k_B T/μ)²].

Then the chemical potential μ is

μ = ε_F [1 − (π²/24)(k_B T/ε_F)²]^{−2} ≈ ε_F [1 + (π²/12)(k_B T/ε_F)²].

We make a plot of the number distribution as a function of x = ε/ε_F:

D₁(ε) f(ε) ∝ x^{−1/2} / {exp[(1/η)(x − 1 − π²η²/12)] + 1},

where η = k_B T/ε_F is changed as a parameter. We choose η = 0.3.

Fig. The number distribution vs x = ε/ε_F for η = k_B T/ε_F = 0.3 (1D case). The chemical potential at this temperature is denoted by the dashed line; it shifts to the high-energy side of the Fermi energy at T = 0 K (x = 1). The area for x > 1 (shaded region, denoted by green) is the same as the area removed from below x = 1.
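The 1D prediction that μ rises with temperature can be verified the same way as in the 3D case, now with D(ε) ∝ ε^{−1/2}. A sketch in units ε_F = 1 (the substitution ε = u² removes the integrable endpoint singularity; cutoff and bracket are again pragmatic choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# 1D number conservation with D(e) ∝ e^{-1/2}; substitute e = u^2
# so the integrand becomes smooth at the origin.
def n_1d(mu, kT):
    f = lambda u: 2.0 / (np.exp((u * u - mu) / kT) + 1.0)
    val, _ = quad(f, 0.0, np.sqrt(mu + 40 * kT))
    return val

n0 = 2.0                           # ∫_0^1 e^{-1/2} de = 2 at T = 0
for kT in [0.05, 0.1, 0.2]:
    mu_exact = brentq(lambda m: n_1d(m, kT) - n0, 0.5, 2.0)
    mu_somm = 1.0 + (np.pi**2 / 12.0) * kT**2   # note the + sign in 1D
    print(kT, round(mu_exact, 4), round(mu_somm, 4))
```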
12. Sommerfeld formula for the 3D system

Next we consider the case of a three-dimensional system:

N = ∫_0^∞ f(ε) D₃(ε) dε = ∫_0^{ε_F} D₃(ε) dε,

where the density of states for the 3D system is

D₃(ε) = (V/2π²)(2m/ħ²)^{3/2} ε^{1/2} ≡ a₃ ε^{1/2}.

We use the Sommerfeld formula

N ≃ ∫_0^μ D₃(ε) dε + (π²/6)(k_B T)² D₃′(μ) = (2/3) a₃ μ^{3/2} + (π²/12)(k_B T)² a₃ μ^{−1/2}.   (1)

Combining this with N = (2/3) a₃ ε_F^{3/2}   (2), we get

ε_F^{3/2} = μ^{3/2} [1 + (π²/8)(k_B T/μ)²],

so the chemical potential is

μ = ε_F [1 + (π²/8)(k_B T/ε_F)²]^{−2/3} ≈ ε_F [1 − (π²/12)(k_B T/ε_F)²].

Fig. Chemical potential as a function of temperature for the ideal 3D Fermi gas and the 1D Fermi gas.

Fig. The number distribution vs x = ε/ε_F for η = k_B T/ε_F = 0.1 to 0.6 (3D case). The chemical potential at each temperature is denoted by a dashed line; it shifts to the low-energy side of the Fermi energy at T = 0 K (x = 1) with increasing temperature. The area for x > 1 (shaded region, denoted by green) is the same as the area removed from below x = 1.

13. Exact solution for the chemical potential for the 2D case

The density of states for the 2D system is

D₂(ε) dε = 2 · (L/2π)² · 2πk dk = (L²/π) k dk = (mL²/πħ²) dε,

or

D₂(ε) = mL²/(πħ²),

independent of ε. Because of this, the chemical potential can be evaluated exactly; in other words, we do not have to use the Sommerfeld approximation. At T = 0,

N = ∫_0^{ε_F} D₂ dε = (mL²/πħ²) ε_F,

so ε_F = πħ² n_{2D}/m with n_{2D} = N/L². At finite T,

N = D₂ ∫_0^∞ dε/[z⁻¹ e^{βε} + 1],

which gives

βε_F = ∫_0^∞ β dε/[z⁻¹ e^{βε} + 1].

We put x = e^{βε}, dx = βx dε:

∫_0^∞ β dε/[z⁻¹ e^{βε} + 1] = ∫_1^∞ z dx/[x(x + z)] = ∫_1^∞ [1/x − 1/(x + z)] dx = [ln(x/(x + z))]_1^∞ = ln(1 + z).

Thus βε_F = ln(1 + z), i.e.

e^{βε_F} = 1 + z,   z = e^{βμ} = e^{βε_F} − 1,

and we have

μ = k_B T ln(e^{ε_F/k_B T} − 1).

Fig. Chemical potential of the 2D system: plot of y = μ/ε_F as a function of x = k_B T/ε_F.

Fig. Chemical potential of the 1D, 2D, and 3D systems: plot of y = μ/ε_F as a function of x = k_B T/ε_F.

REFERENCES
J. Rau, Statistical Physics and Thermodynamics: An Introduction to Key Concepts (Oxford, 2017).
R. Kubo, Statistical Mechanics: An Advanced Course with Problems and Solutions (North-Holland, 1965).
M.D. Sturge, Statistical and Thermal Physics: Fundamentals and Applications (A.K. Peters, 2003).
H.P. Myers, Introductory Solid State Physics, second edition (Taylor and Francis, 1997).
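As a closing numerical note, the exact 2D formula of Section 13 is trivial to evaluate. A sketch in units ε_F = 1 (the sample temperatures are chosen arbitrarily), reproducing the behavior of the 2D curve in the figures above:

```python
import numpy as np

# Exact 2D chemical potential: mu = kB*T * ln(exp(eF/(kB*T)) - 1), eF = 1.
# y = mu/eF as a function of x = kB*T/eF.
for x in [0.1, 0.5, 1.0, 2.0]:
    y = x * np.log(np.expm1(1.0 / x))   # expm1 keeps precision at small x
    print(x, round(y, 4))
```

Because D₂ is constant, D₂′ = 0 and the quadratic Sommerfeld correction vanishes: in 2D the chemical potential deviates from ε_F only by an exponentially small amount at low temperature, consistent with the flat low-temperature 2D curve in the comparison figure.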
188812
https://old.maa.org/press/periodicals/convergence/illustrating-the-nine-chapters-on-the-mathematical-art-their-use-in-a-college-mathematics-history-0
Illustrating The Nine Chapters on the Mathematical Art: Their Use in a College Mathematics History Classroom – Conclusion and References | Mathematical Association of America
Author(s): Joel K.
Haack (University of Northern Iowa)

Conclusion

I hope that you can see the contribution made by the study tour of China to my appreciation of the Nine Chapters. Although cultural context for the Nine Chapters was merely a small part of what we learned on the trip, we can now share it with our students and colleagues. I have found that it truly enriches the experience of my students in the history of mathematics.

About the Author

Joel Haack is a professor of mathematics at the University of Northern Iowa, with mathematical interests that include the history of mathematics, algebra, probability and statistics, and the connections between mathematics and the arts and humanities. Joel and his wife Linda were frequent travelers with the MAA Mathematical Study Tours, having traveled with the group to Greece, England, Mexico, China, Russia, Switzerland, Germany, Ecuador, Guatemala, Honduras, and Italy. Returning to the faculty after 20 years in administration, Joel is pleased to be able to share what he learned on the MAA Study Tours with his students.

References

Dauben, Joseph W., "Chinese Mathematics," The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook, ed. Victor Katz, Princeton: Princeton University Press, 2007, 187-384.

Haack, Joel, "A Sampling of The Nine Chapters on the Mathematical Art," ICTM [Iowa Council of Teachers of Mathematics] Journal 31 (Spring, 2005), 23-28.

Joseph, George Gheverghese, The Crest of the Peacock: Non-European Roots of Mathematics, New York: Penguin Books, 1991.

Martzloff, Jean-Claude, A History of Chinese Mathematics, Berlin: Springer-Verlag, 1997.

Shen Kangshen, John N. Crossley, and Anthony W.-C. Lun, The Nine Chapters on the Mathematical Art: Companion and Commentary, Oxford: Oxford University Press and Beijing: Science Press, 1999.

Swetz, Frank, and T. I. Kao, Was Pythagoras Chinese? An Examination of Right Triangle Theory in Ancient China, University Park, PA: The Pennsylvania State University Press and Reston, VA: National Council of Teachers of Mathematics, 1977.

Wikipedia, "The Nine Chapters on the Mathematical Art," accessed April 7, 2017.

Joel K. Haack (University of Northern Iowa), "Illustrating The Nine Chapters on the Mathematical Art: Their Use in a College Mathematics History Classroom – Conclusion and References," Convergence (May 2017)
188813
https://datazone.birdlife.org/species/factsheet/bar-tailed-godwit-limosa-lapponica
Bar-tailed Godwit Limosa lapponica Species Factsheet | BirdLife DataZone
Bar-tailed Godwit Limosa lapponica
Scolopacidae (Sandpipers, Snipes, Phalaropes)
Image © Leo Berzins

Number of mature individuals: -
Population trend: Decreasing
IUCN Red List category: Near Threatened (NT)

At a glance
Extent of occurrence: 9,050,000 km²
Migratory status: Full Migrant
Breeding endemic: No
IBAs/KBAs identified for this species

IUCN Red List assessment
Year of last assessment: 2016
IUCN Red List category: Near Threatened
Criteria: A2abc+3bc+4abc

Justification for Red List category
This species has an extremely large range and consists of several subpopulations using different flyways. The lapponica subspecies, which breeds and winters within Europe, is thought to be experiencing an increase in the wintering population, but the breeding trend is unknown. Of the taymyrensis subspecies, which breeds in Siberia, the population wintering in west and south-west Africa is estimated to be declining, whilst the trend for the population wintering in south and south-west Asia and east Africa is not known. Two subspecies, menzbieri and baueri, use the East Asian-Australasian Flyway and are both undergoing extremely rapid declines, probably owing to severe habitat loss in the Yellow Sea.
As a result of severe declines in populations using the East Asian-Australasian Flyway, the species has been uplisted to Near Threatened; it almost meets the requirements for listing as threatened under criteria A2abc+3bc+4abc.

Distribution
Extent of Occurrence (EOO): 9,050,000 km²
Continuing decline in EOO: Unknown
Area of Occupancy (AOO): Unknown
Continuing decline in AOO: Unknown
Breeding endemic: No

Range description
The species breeds across the Arctic from northern Europe through Siberia to Alaska (U.S.A.), wintering along the coasts of western Europe, Africa, the Middle East, south- and south-east Asia, Australia and New Zealand. L. l. lapponica breeds in northern parts of Fennoscandia east through the Kola and Kanin Peninsulas (Russia) and winters on the coasts of Africa, east to the Persian Gulf and west India. L. l. taymyrensis breeds in north-west and north-central Siberia from the Yamal Peninsula to the River Anabar basin; it winters on the coasts of Africa east to the Persian Gulf and west India. A large proportion of the taymyrensis population winters at Bar al Hikman, Oman (87,187 individuals in December 2013; de Fouw in litt. to Wetlands International). L. l. menzbieri breeds in northern Siberia between the Lena Delta and Chaunskaya Bay, wintering from south-east Asia to north-west Australia. L. l. baueri breeds from north-east Siberia (east of Chaunskaya Bay) to west and north Alaska, wintering from China to Australia, New Zealand and some south-west Pacific islands. L. l. anadyrensis breeds in east Siberia (Chukotka and Anadyr lowlands); wintering areas are uncertain, but are potentially in Australia and New Zealand (Van Gils and Wiersma 1996).

Number of locations: Unknown
Continuing decline in number of locations: Unknown
Severely fragmented: No
Very restricted AOO/number of locations: No

Country/territory distribution

Population
Number of mature individuals: -
Year of estimate: 2012
Data quality: Poor
Derivation: Estimated
Number of subpopulations: -
All individuals in one subpopulation: No

Population information
The global population is estimated to number c. 1,099,000-1,149,000 individuals (Wetlands International 2017). The European breeding population is estimated at 3,700-9,000 pairs, which equates to 7,400-18,000 mature individuals (BirdLife International 2015). The East Asian-Australasian Flyway population has been estimated at 325,000 individuals (Hansen et al. 2016).

Current population trend: Decreasing
Derivation: Suspected
A2: Past (suspected) population change: 20-29%
A3: Future (suspected) population change: 20-29%
A4: Past & future (suspected) population change (years 2003-2030): 20-29%
Continuing decline in mature individuals: Unknown
Extreme fluctuations in mature individuals: No
Extreme fluctuations in subpopulations: No
Continuing decline in number of subpopulations: Unknown

Trend justification
The overall population trend is thought to be decreasing, although some populations may be stable and others have unknown trends (Wetlands International 2015). In Europe (lapponica population), the breeding population trend is unknown; however, the wintering population trend is increasing (BirdLife International 2015).
In West Africa (taymyrensis population), the population decreased between 2003 and 2014 and between 1979 and 2014 (745,803 wintering birds in the 1980s, 516,920 in the 1990s and 497,433 in the 2010s) (van Roomen et al. 2014). In East Africa (taymyrensis population), the population trend is unknown (Wetlands International 2015). Approximately 27-28% of the global population uses the East Asian-Australasian Flyway (menzbieri and baueri populations), and there is considerable concern that loss of intertidal stopover habitat in the Yellow Sea region of East Asia is driving population declines in shorebirds (Amano et al. 2010, Yang et al. 2011). Both the menzbieri and baueri populations have experienced strong declines (declines of 79.1% and 30.2% over three generations) according to monitoring data from around Australia and New Zealand (Studds et al. in prep.). An analysis of survival rates provides additional evidence for declines in the menzbieri population. Survival of the species remained relatively high in north-west Australia; however, during time away from Australia the population began to decline in 2011, with an annual survival rate of 0.71 between 2011 and 2012 (Piersma et al. 2016). Given the low survival, the study suggests that the population will halve within four years. Recent data also suggest that the baueri population may decline by 44% within 10 years (Conklin et al. 2016).

Ecology
Habitat and ecology
The species breeds in marshy, swampy areas in lowland moss and shrub tundra (Johnsgard 1981, Flint et al. 1984, del Hoyo et al. 1996) near wet river valleys (Johnsgard 1981), lakes and sedge bogs (Flint et al. 1984), as well as on swampy heathlands in the willow and birch zone near the Arctic treeline (Johnsgard 1981), in open larch Larix spp. woodland close to water (del Hoyo et al. 1996), and occasionally on open bogs in the extreme north of the coniferous forest zone (Johnsgard 1981). The nest is a depression positioned on a dry elevated site (del Hoyo et al. 1996) such as a tundra ridge (Johnsgard 1981) or hummock (Flint et al. 1984), often between clumps of grass (del Hoyo et al. 1996) or under a thicket (Flint et al. 1984). On passage the species may frequent inland wetlands (Hayman et al. 1986), sandy beaches with pine Pinus spp. stands, swampy lowlands near lakes (Flint et al. 1984) and short-grass meadows, but during the winter it is more common in intertidal areas along muddy coastlines, estuaries, inlets, mangrove-fringed lagoons and sheltered bays (del Hoyo et al. 1996) with tidal mudflats or sandbars (Johnsgard 1981). When breeding, the species feeds on insects, annelid worms, molluscs and occasionally seeds and berries (del Hoyo et al. 1996). In intertidal areas the species's diet consists of annelids (e.g. Nereis spp. and Arenicola spp.), bivalves and crustaceans, although it will also take cranefly larvae and earthworms on grasslands and occasionally larval amphibians (tadpoles) and small fish (del Hoyo et al. 1996). This species is a full long-distance migrant, with satellite data showing that western Alaskan individuals can travel >11,000 km to New Zealand without stopping (Gill et al. 2009).
Ecosystems: Marine, Freshwater, Terrestrial
Migratory status: Full Migrant
Elevation: 0-440 m
Occasional elevation limits: -
Generation length: 8.9 years
Species type: Waterbird

Habitat classification
| Habitat (level 1) | Habitat (level 2) | Season | Importance |
| --- | --- | --- | --- |
| Marine Intertidal | Sandy Shoreline and/or Beaches, Sand Bars, Spits, Etc | Non-breeding | Major |
| Marine Neritic | Coral Reef - Back Slope | Non-breeding | Suitable |
| Marine Neritic | Coral Reef - Foreslope (Outer Reef Slope) | Non-breeding | Suitable |
| Marine Neritic | Coral Reef - Inter-Reef Rubble Substrate | Non-breeding | Suitable |
| Marine Neritic | Coral Reef - Inter-Reef Soft Substrate | Non-breeding | Suitable |
| Marine Neritic | Coral Reef - Lagoon | Non-breeding | Suitable |
| Marine Neritic | Coral Reef - Outer Reef Channel | Non-breeding | Suitable |
| Marine Neritic | Estuaries | Non-breeding | Major |
| Wetlands (inland) | Permanent Freshwater Lakes (over 8ha) | Non-breeding | Suitable |
| Wetlands (inland) | Permanent Rivers/Streams/Creeks (includes waterfalls) | Breeding | Suitable |
| Wetlands (inland) | Seasonal/Intermittent Freshwater Lakes (over 8ha) | Non-breeding | Suitable |
| Wetlands (inland) | Tundra Wetlands (incl. pools and temporary waters from snowmelt) | Breeding | Major |

Threats
Threats impacting the species
Threats on the breeding grounds include oil and gas exploration and associated infrastructure development, legal subsistence harvesting and illegal hunting, and increases in predator numbers (Brown et al. 2014). Climate change has the potential to affect vegetation and the extent of suitable breeding habitat (P. Battley in litt. 2016). The species is threatened by the degradation of stopover and non-breeding sites due to land reclamation, shellfisheries, pollution, human disturbance, reduced river flows, and in some areas the invasion of mudflats and coastal saltmarshes by mangroves (owing to sea-level rise and increased sedimentation and nutrient loads at the coast from uncontrolled development and soil erosion in upstream catchment areas) (del Hoyo et al. 1996, Kelin and Qiang 2006, Straw and Saintilan 2006, Melville et al. 2016). Anthropogenic nutrient enrichment of wetland areas at non-breeding sites can also cause cyanobacterium blooms that may impact this species's prey species (Estrella et al. 2011). Loss of intertidal stopover habitats due to reclamation activities in the Yellow Sea region of the East Asian-Australasian Flyway is thought to be driving declines in shorebird populations (Amano et al. 2010, Yang et al. 2011, Leyrer et al. 2014, Choi et al. 2015, Melville et al. 2016, Piersma et al. 2016). It is estimated that up to 65% of tidal flats in the Yellow Sea region have been lost over the past five decades, with an annual loss of 1.2% per year since the 1980s (Murray et al. 2014), and the Republic of Korea had lost 75% of its mudflats by 2010 (Moores et al. 2016). These losses are attributed to urban, industrial and agricultural expansion within the region. Development elsewhere in the species's range is also considered a threat to important habitat: potential oil and gas extraction activities and commercial and industrial development threaten staging and non-breeding grounds in West Africa, the Middle East and the Wadden Sea, while rapid residential, commercial and industrial development in West Africa and the Middle East threatens staging and wintering grounds (Leyrer et al. 2014).
Although not currently detected in certain populations (e.g. in Australia; Curran et al. 2014), the species has also been susceptible to avian influenza in the past, so it may be threatened by future outbreaks of the virus (Melville and Shortridge 2006). This species may be hunted in some range states, with anecdotal evidence from individuals carrying transmitters being shot (P. Battley in litt. 2016, J. Szabo in litt. 2016).

Threat classification
| Threat (level 1) | Threat (level 2) | Timing | Scope | Severity | Impact | Stresses |
| --- | --- | --- | --- | --- | --- | --- |
| Agriculture & aquaculture | Annual & perennial non-timber crops - Agro-industry farming | Ongoing | Majority (50-90%) | Slow, Significant Declines | Low | Ecosystem degradation, Ecosystem conversion |
| Agriculture & aquaculture | Marine & freshwater aquaculture - Industrial aquaculture | Ongoing | Minority (<50%) | Slow, Significant Declines | Low | Ecosystem degradation, Ecosystem conversion |
| Biological resource use | Hunting & trapping terrestrial animals - Intentional use (species is the target) | Ongoing | Minority (<50%) | Negligible declines | Negligible | Species mortality |
| Climate change & severe weather | Habitat shifting & alteration | Future | Majority (50-90%) | Unknown | - | Ecosystem degradation, Ecosystem conversion |
| Energy production & mining | Oil & gas drilling | Future | Minority (<50%) | Slow, Significant Declines | - | Species disturbance, Ecosystem degradation, Ecosystem conversion |
| Human intrusions & disturbance | Recreational activities | Ongoing | Whole (>90%) | Slow, Significant Declines | Medium | Species disturbance, Ecosystem degradation |
| Invasive and other problematic species, genes & diseases | Invasive non-native/alien species/diseases - Spartina anglica | Ongoing | Minority (<50%) | Slow, Significant Declines | Low | Ecosystem degradation, Ecosystem conversion |
| Invasive and other problematic species, genes & diseases | Problematic native species/diseases - Unspecified species | Ongoing | Minority (<50%) | Slow, Significant Declines | Low | Ecosystem degradation, Ecosystem conversion |
| Invasive and other problematic species, genes & diseases | Viral/prion-induced diseases - Avian Influenza Virus (H5N1 subtype) | Past, Likely to Return | Unknown | Unknown | - | Species mortality |
| Natural system modifications | Dams & water management/use - Dams (size unknown) | Ongoing | Minority (<50%) | Slow, Significant Declines | Low | Ecosystem degradation, Ecosystem conversion |
| Natural system modifications | Other ecosystem modifications | Ongoing | Whole (>90%) | Slow, Significant Declines | Medium | Ecosystem degradation, Ecosystem conversion |
| Pollution | Industrial & military effluents - Oil spills | Ongoing | Whole (>90%) | Slow, Significant Declines | Medium | Ecosystem degradation |
| Residential & commercial development | Commercial & industrial areas | Ongoing | Majority (50-90%) | Slow, Significant Declines | Low | Ecosystem degradation, Ecosystem conversion |
| Residential & commercial development | Housing & urban areas | Ongoing | Minority (<50%) | Slow, Significant Declines | Low | Ecosystem degradation, Ecosystem conversion |

Use and trade
End uses
| Purpose | Scale |
| --- | --- |
| Food - human | Subsistence, National |
| Sport hunting/specimen collecting | Subsistence, National |

Conservation action
Conservation actions underway and needed
Conservation and Research Actions Underway
EU Birds Directive Annex I and II. CMS Appendix II.
L. l. taymyrensis is listed in Column B, categories 2a and 2c of the AEWA Action Plan, and L. l. lapponica is in Column B, category 2a (Leyrer et al. 2014). In 2014, four subspecies (L. l. taymyrensis, menzbieri, anadyrensis and baueri) were proposed for listing for Cooperative Action under CMS (Leyrer et al. 2014). The removal of Spartina anglica from tidal mudflats using a herbicide has shown benefits for the species (Evans 1986).

Conservation and Research Actions Proposed
Working with governments, protect remaining intertidal habitats across the species's range (including the Yellow Sea) to prevent further habitat loss and degradation (Van Gils and Wiersma 1996, Yang et al. 2011, Threatened Species Scientific Committee 2016), and try to restore or create new areas of suitable habitat (Threatened Species Scientific Committee 2016). Ensure adequate protection and management of all important staging sites. Incorporate requirements for this species into the planning of coastal development (Threatened Species Scientific Committee 2016). Legally protect the species in all range states (Leyrer et al. 2014). Promote sustainable fisheries in the Wadden Sea and other important European estuaries. Continue to monitor the species and expand schemes to provide reliable population estimates. Conduct research to improve knowledge of threats, with particular focus on the impact of pollutants, disturbance and hunting, increasing Red Fox Vulpes vulpes abundance, changes in lemming population cycles and the northward encroachment of scrub habitat (Brown et al. 2014, Threatened Species Scientific Committee 2016). Additionally, the impacts of climate change in the high Arctic should be investigated. Use tracking technology to identify migration routes, key staging sites and the timing of migration across its range (Leyrer et al. 2014, Threatened Species Scientific Committee 2016). Increase public awareness of the species and highlight the importance of key staging sites (Leyrer et al. 2014).

Conservation actions in place

| Action | In place? | Details |
| --- | --- | --- |
| Action recovery plan | No | |
| Systematic monitoring scheme | Yes | Monitored in at least parts of its range by the International Waterbird Census (>10 records received in >50% of the years that the census has been running in the relevant region) |
| Conservation sites identified | Yes, over entire range | |
| Invasive species control or prevention | No | |
| Harvest management plan | - | |
| Successfully reintroduced/introduced benignly | No | |
| Subject to ex-situ conservation | No | |
| Subject to recent education and awareness programmes | No | |
| Included in international legislation | Yes | EU Birds Directive Annex I and II. CMS Appendix II. |
| Subject to any international management/trade controls | No | |

Alliance for Zero Extinction species: No
Search for Lost Birds species: No

Conservation actions needed

| Action (level 1) | Action (level 2) | Action (level 3) | Details |
| --- | --- | --- | --- |
| Education & awareness | Awareness & communications | | Increase public awareness of the species and highlight the importance of key staging sites (Leyrer et al. 2014). |
| Land/water management | Habitat & natural process restoration | | Try to restore or create new areas of suitable habitat (Threatened Species Scientific Committee 2016). |
| Land/water management | Site/area management | | Ensure adequate protection and management of all important staging sites. Incorporate requirements for this species into the planning of coastal development (Threatened Species Scientific Committee 2016). |
| Land/water protection | Resource & habitat protection | | Working with governments, protect remaining intertidal habitats across the species's range (including the Yellow Sea) to prevent further habitat loss and degradation (Van Gils and Wiersma 1996, Yang et al. 2011, Threatened Species Scientific Committee 2016). |
| Land/water protection | Site/area protection | | Working with governments, protect remaining intertidal habitats across the species's range (including the Yellow Sea) to prevent further habitat loss and degradation (Van Gils and Wiersma 1996, Yang et al. 2011, Threatened Species Scientific Committee 2016). |
| Law & policy | Legislation | National level | Legally protect the species in all range states (Leyrer et al. 2014). |
| Livelihood, economic & other incentives | Non-monetary values | | Promote sustainable fisheries in the Wadden Sea and other important European estuaries. |

Research needed

| Research (level 1) | Research (level 2) | Details |
| --- | --- | --- |
| Monitoring | Population trends | Continue to monitor the species and expand schemes to provide reliable population estimates. |
| Research | Life history & ecology | Use tracking technology to identify migration routes, key staging sites and the timing of migration across its range (Leyrer et al. 2014, Threatened Species Scientific Committee 2016). |
| Research | Threats | Conduct research to improve knowledge of threats, with particular focus on the impact of pollutants, disturbance and hunting, increasing Red Fox Vulpes vulpes abundance, changes in lemming population cycles and the northward encroachment of scrub habitat (Brown et al. 2014, Threatened Species Scientific Committee 2016). Additionally, the impacts of climate change in the high Arctic should be investigated. |

Assessment history

IUCN Red List assessment history
Note that a change in IUCN Red List category does not necessarily indicate a genuine change in the status of the species, but may simply reflect improved knowledge of the species' status.

| Assessment year | IUCN Red List Category | Criteria |
| --- | --- | --- |
| 2016 | Near Threatened | A2abc+3bc+4abc |
| 2015 | Near Threatened | A2abc+3bc+4abc |
| 2012 | Least Concern | |
| 2009 | Least Concern | |
| 2008 | Least Concern | |
| 2004 | Least Concern | |
| 2000 | Lower Risk/Least Concern | |
| 1994 | Lower Risk/Least Concern | |
| 1988 | Lower Risk/Least Concern | |

Taxonomy
Order: Charadriiformes
Family: Scolopacidae
Authority: (Linnaeus, 1758)

Taxonomic sources
Christidis, L. and Boles, W.E. 2008. Systematics and Taxonomy of Australian Birds. CSIRO Publishing, Collingwood, Australia.
Cramp, S. and Simmons, K.E.L. (eds). 1977-1994. Handbook of the Birds of Europe, the Middle East and Africa. The Birds of the Western Palearctic. Oxford University Press, Oxford.
AERC TAC. 2003. AERC TAC Checklist of bird taxa occurring in Western Palearctic region, 15th Draft. Available at:
SACC. 2005 and updates. A classification of the bird species of South America. Available at:
Turbott, E.G. 1990. Checklist of the Birds of New Zealand. Ornithological Society of New Zealand, Wellington.
del Hoyo, J., Collar, N.J., Christie, D.A., Elliott, A. and Fishpool, L.D.C. 2014. HBW and BirdLife International Illustrated Checklist of the Birds of the World. Volume 1: Non-passerines. Lynx Edicions and BirdLife International, Barcelona, Spain and Cambridge, UK.
References
Amano, T.; Szekely, T.; Koyama, K.; Amano, H.; Sutherland, W.J. 2010. A framework for monitoring the status of populations: an example from wader populations in the East Asian-Australasian flyway. Biological Conservation 143: 2238-2247.
BirdLife International. 2015. European Red List of Birds. Office for Official Publications of the European Communities, Luxembourg.
Brown, D.; Crockford, N.; Sheldon, R. 2014. Drivers of population change and conservation priorities for the Numeniini populations of the world. RSPB and IWSG.
Choi, C.-Y.; Battley, P.F.; Potter, M.A.; Rogers, K.G.; Ma, Z. 2015. The importance of Yalu Jiang coastal wetland in the north Yellow Sea to Bar-tailed Godwits Limosa lapponica and Great Knots Calidris tenuirostris during northward migration. Bird Conservation International 25(1): 53-70.
Conklin, J.R.; Lok, T.; Melville, D.S.; Riegen, A.C.; Schuckard, R.; Piersma, T.; Battley, P.F. 2016. Declining adult survival of New Zealand Bar-tailed Godwits during 2005-2012 despite apparent population stability. Emu 116: 147-157.
Curran, J.M.; Ellis, T.M.; Robertson, I.D. 2014. Surveillance of Charadriiformes in Northern Australia shows species variations in exposure to Avian Influenza Virus and suggests negligible virus prevalence. Avian Diseases 58: 199-204.
Estrella, S.M.; Storey, A.W.; Pearson, G.; Piersma, T. 2011. Potential effects of Lyngbya majuscula blooms on benthic invertebrate diversity and shorebird foraging ecology at Roebuck Bay, Western Australia: preliminary results. Journal of the Royal Society of Western Australia 94: 171-179.
Evans, P.R. 1986. Use of the herbicide 'Dalapon' for control of Spartina encroaching on intertidal mudflats: beneficial effects on shorebirds. Colonial Waterbirds 9(1): 171-175.
Flint, V.E.; Boehme, R.L.; Kostin, Y.V.; Kuznetsov, A.A. 1984. A Field Guide to Birds of the USSR. Princeton University Press, Princeton, New Jersey.
Gill, R.E.J.; Tibbitts, T.L.; Douglas, D.C.; Handel, C.M.; Mulcahy, D.M.; Gottschalck, J.C.; Warnock, N.; McCaffery, B.J.; Battley, P.F.; Piersma, T. 2009. Extreme endurance flights by landbirds crossing the Pacific Ocean: ecological corridor rather than barrier? Proc. R. Soc. Lond. B 276: 447-457.
Hansen, B.D.; Fuller, R.A.; Watkins, D.; Rogers, D.I.; Clemens, R.S.; Newman, M.; Woehler, E.J.; Weller, D.R. 2016. Revision of the East Asian-Australasian Flyway population estimates for 37 listed migratory shorebird species. Unpublished report for the Department of the Environment. BirdLife Australia, Melbourne.
Hayman, P.; Marchant, J.; Prater, A.J. 1986. Shorebirds. Croom Helm, London.
Johnsgard, P.A. 1981. The Plovers, Sandpipers and Snipes of the World. University of Nebraska Press, Lincoln, U.S.A. and London.
Kelin, C.; Qiang, X. 2006. Conserving migratory shorebirds in the Yellow Sea region. In: Boere, G.; Galbraith, C.; Stroud, D. (eds), Waterbirds Around the World, p. 319. The Stationery Office, Edinburgh, UK.
Leyrer, J.; van Nieuwenhove, N.; Crockford, N.; Delany, S. 2014. Proposals for Concerted and Cooperative Action for consideration by CMS COP 11, November 2014. BirdLife International and International Wader Study Group.
Melville, D.S.; Chen, Y.; Ma, Z. 2016. Shorebirds along the Yellow Sea coast of China face an uncertain future - a review of threats. Emu 116: 100-110.
Melville, D.S.; Shortridge, K.F. 2006. Migratory waterbirds and avian influenza in the East Asian-Australasian Flyway with particular reference to the 2003-2004 H5N1 outbreak. In: Boere, G.; Galbraith, C.; Stroud, D. (eds), Waterbirds Around the World, pp. 432-438. The Stationery Office, Edinburgh, UK.
Moores, N.; Rogers, D.I.; Rogers, K.G.; Hansbro, P.M. 2016. Tidal-flat reclamation and shorebird declines in Saemangeum and the Republic of Korea. Emu 116: 136-146.
Murray, N.J.; Clemens, R.S.; Phinn, S.R.; Possingham, H.P.; Fuller, R.A. 2014. Tracking the rapid loss of tidal wetlands in the Yellow Sea. Frontiers in Ecology and the Environment 12: 267-272.
Piersma, T.; Lok, T.; Chen, Y.; Hassell, C.J.; Yang, H.-Y.; Boyle, A.; Slaymaker, M.; Chan, Y.-C.; Melville, D.S.; Zhang, Z.-W.; Ma, Z. 2016. Simultaneous declines in summer survival of three shorebird species signals a flyway at risk. J. App. Ecol. 53: 479-490.
Straw, P.; Saintilan, N. 2006. Loss of shorebird habitat as a result of mangrove incursion due to sea-level rise and urbanization. In: Boere, G.; Galbraith, C.; Stroud, D. (eds), Waterbirds Around the World, pp. 717-720. The Stationery Office, Edinburgh, UK.
Studds, C.E. et al. In prep. Dependence on the Yellow Sea predicts population collapse in a migratory flyway.
Threatened Species Scientific Committee. 2016. Conservation Advice: Limosa lapponica menzbieri, Bar-tailed Godwit (northern Siberian). Australian Government, Department of the Environment and Energy.
Van Gils, J.; Wiersma, P. 1996. Bar-tailed Godwit (Limosa lapponica). In: del Hoyo, J.; Elliott, A.; Sargatal, J.; Christie, D.A.; de Juana, E. (eds), Handbook of the Birds of the World Alive. Lynx Edicions, Barcelona.
Wetlands International. 2017. Waterbird Population Estimates. Available at: (Accessed: 13/02/2017).
Yang, H.Y.; Chen, B.; Barter, M.; Piersma, T.; Zhou, C.-F.; Li, F.-S.; Zhang, Z.-W. 2011. Impacts of tidal land reclamation in Bohai Bay, China: ongoing losses of critical Yellow Sea waterbird staging and wintering sites. Bird Conservation International 21: 241-259.
del Hoyo, J.; Elliott, A.; Sargatal, J. 1996. Handbook of the Birds of the World, vol. 3: Hoatzin to Auks. Lynx Edicions, Barcelona, Spain.
van Roomen, M.; van Winden, E.; Langendoen, T. 2014. The assessment of trends and population sizes of a selection of waterbird species and populations from the coastal East Atlantic Flyway for Conservation Status Report 6 of the African Eurasian Waterbird Agreement.

Credits & acknowledgements
Assessor(s): BirdLife International
Compiler(s): Malpas, L., Westrip, J., Symes, A., Ekstrom, J., Ashpole, J., Butchart, S.
Reviewer(s): Butchart, S., Symes, A.
BirdLife Species Champion(s): -
BirdLife Species Guardian(s): -
Contributor(s): Battley, P., Alaskan Shorebird Group, Balachandran, S., Melville, D., Szabo, J., Nagy, S., van Roomen, M., Meltofte, H.

Sub-global Red List assessments

| Geographic scope | Year | Category | Criteria |
| --- | --- | --- | --- |
| EU27 + UK | 2021 | LC | N/A |
| Europe | 2021 | LC | N/A |
| Australia | 2021 | EN | A2bce+3ce+4bce |

Sites
Important sites (IBAs/KBAs) identified for the species [interactive map and site list not captured].
Proportion of IBAs/KBAs identified for the species overlapping with protected areas/OECMs [data not captured].

Key policy instruments

Policy instruments relating to species
Please note that this is not a comprehensive list of policy instruments relating to this species - only a selection of key instruments are shown.

| Policy instrument | Listing | Number of national Parties/Signatories |
| --- | --- | --- |
| CMS | Appendix II | 132 |
| AEWA | Annex II | 84 |

Other resources
Visit the Conservation Evidence website.

Citations
Recommended citation: BirdLife International (2016). Species factsheet: Bar-tailed Godwit Limosa lapponica. Downloaded from 28/09/2025.
Recommended citation for assessments for more than one species: BirdLife International (2025) IUCN Red List for birds. Downloaded from 28/09/2025.
188814
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Conjugation/Conjugated_Dienes
Conjugated Dienes
Last updated: Jan 23, 2023

A diene is a hydrocarbon chain that has two double bonds, which may or may not be adjacent to each other. This section focuses on the delocalization of pi systems by comparing two neighboring double bonds. The arrangement of these double bonds can have varying effects on a compound's reactivity and stability.

Naming Dienes
First identify the longest chain containing both double-bond carbons. Then assign the lowest possible locants to the double-bond carbons and to any other functional groups present (remember that when naming alkenes some groups, such as alcohols, take priority). Do not forget stereochemistry or other descriptors of double-bond orientation (E/Z, cis or trans).

Conjugated vs. Nonconjugated vs. Cumulated Dienes
Conjugated dienes have two double bonds separated by one single bond.
Nonconjugated (isolated) dienes have two double bonds separated by more than one single bond.
Cumulated dienes have two double bonds that share a common atom.
Electrostatic potential maps show that in conjugated dienes the pi electron density overlaps and is delocalized across the system, while in nonconjugated and cumulated dienes the pi electron density is localized in separate regions of the molecule. Since greater delocalization of electron density makes a molecule more stable, conjugated dienes are more stable than nonconjugated and cumulated dienes.

Stability of Conjugated Dienes
Conjugated dienes are more stable than nonconjugated dienes (both isolated and cumulated) due to factors such as delocalization of charge through resonance and hybridization. The same reasoning explains why allylic radicals and carbocations are much more stable than comparable secondary or even tertiary species. This is all due to the positioning of the pi orbitals, whose overlap strengthens the single bond between the two double bonds. The resonance structures of a conjugated diene show how charge is delocalized across its four carbons; this delocalization stabilizes the molecule.

Along with resonance, hybridization affects the stability of the compound. For example, in 1,3-butadiene the carbons joined by the central single bond are sp2 hybridized, unlike in nonconjugated dienes, where the single-bond carbons are sp3 hybridized. The greater 's' character of these sp2 carbons draws the electrons in more tightly, making the central single bond stronger and shorter than an ordinary alkane C-C bond (1.54 Å).

Another useful measure is the heat of hydrogenation of the different arrangements of double bonds. Since a higher heat of hydrogenation indicates a less stable compound, conjugated dienes (~54 kcal/mol) are more stable than their isolated (~60 kcal/mol) and cumulated (~70 kcal/mol) counterparts. An energy diagram comparing the heats of hydrogenation of the different diene types illustrates the relative stability of each molecule.

Different Conformations of Conjugated Dienes
There are two different conformations of conjugated dienes: the s-cis and s-trans conformations.
s-cis is when the two double bonds are cis with respect to the central single bond, and s-trans is when they are trans with respect to it. The s-cis conformation is less stable because of the steric interaction between hydrogens on the terminal carbons. One important use of the s-cis conformation is in Diels-Alder cycloaddition reactions: even though the s-trans conformation is more stable, the s-cis conformation is accessible because the molecule can rotate about the central single bond and interconvert between the two forms.

Molecular Orbitals
The four pi electrons from the two double bonds fill the two bonding molecular orbitals: the one with no nodes (2 electrons) and the one with one node (2 electrons). The highest occupied molecular orbital (HOMO) of the diene is the orbital that interacts with the dienophile in cycloaddition reactions.

Problems
1) Draw the following: 1,3-cycloheptadiene
2) Name the following:
3) Draw both s-cis and s-trans conformations of 1,3-pentadiene and state which is more stable and why.
4) Which do you think is more stable, trans-1,3-pentadiene or 1,4-pentadiene, and why?
5) Draw the molecular orbitals for 1,3-butadiene.

Answers
1)
2) (Z)-4-Chloro-1,3-hexadiene
3) s-trans is more stable because there is less steric interaction between the methyl group and the hydrogens than in s-cis.
4) trans-1,3-Pentadiene, because it is a conjugated diene, which is more stable than an isolated diene such as 1,4-pentadiene. This is due to factors such as resonance energy and hybridization.

Contributors
Shravan Rao
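To make the heat-of-hydrogenation comparison above concrete, here is a minimal Python sketch (my illustration, not part of the original page) that ranks the three diene arrangements by the approximate values quoted in the text; the numbers are the rough ~54/~60/~70 kcal/mol figures given above, not precise literature data.

```python
# Lower heat of hydrogenation released = more stable diene arrangement.
heats_kcal_per_mol = {
    "conjugated diene": 54,   # ~54 kcal/mol, most stable
    "isolated diene": 60,     # ~60 kcal/mol
    "cumulated diene": 70,    # ~70 kcal/mol, least stable
}

# Sort from most stable (lowest heat) to least stable (highest heat).
for name, heat in sorted(heats_kcal_per_mol.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~{heat} kcal/mol")
```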
188815
https://www.youtube.com/watch?v=fm7UEyEYo44
How to Balance Ca + HNO3 = Ca(NO3)2 + H2 (Calcium + Nitric acid)
Wayne Breslyn (Dr. B.), posted 15 Sep 2018

Description
In this video we balance the equation Ca + HNO3 = Ca(NO3)2 + H2 and provide the correct coefficients for each compound. To balance Ca + HNO3 = Ca(NO3)2 + H2 you will need to count all of the atoms on each side of the chemical equation. Once you know how many of each type of atom there are, change only the coefficients (the numbers in front of atoms or compounds) to balance the equation for calcium + nitric acid.

Important tips for balancing chemical equations:
- Only change the numbers in front of compounds (the coefficients).
- Never change the numbers after atoms (the subscripts).
- The number of each atom on both sides of the equation must be the same for the equation to be balanced.

For a complete tutorial on balancing all types of chemical equations, watch my video: Balancing Equations in 5 Easy Steps:
More Practice Balancing:
Drawing/writing done in InkScape. Screen capture done with Camtasia Studio 4.0. Done on a Dell Dimension laptop computer with a Wacom digital tablet (Bamboo).

Transcript: In this video we'll balance the equation Ca + HNO3 gives us Ca(NO3)2 + H2. Let's count the atoms up. We have one calcium, one hydrogen, and then, because I have this nitrate ion, this NO3 here, and I have NO3 over here, I'm just going to say I have one NO3 and write it like that; that's going to help us later. Over on the product side I have one calcium, two hydrogens, and for my NO3 I have two NO3s, so I can just put a 2 there. Now when I look at my coefficients I can see I have 2 hydrogens on the product side and 1 here, 2 of these nitrate ions and 1 here. So all I need to do is put a coefficient of two in front of the nitric acid, the HNO3, and this two applies to everything. So I have one hydrogen times the two, which gives me two hydrogens; those are balanced. Then I have one nitrate times the two, which gives me two of these NO3s, and those are balanced. And with that I'm done; this equation is balanced. This is Dr. B with calcium plus nitric acid, and thanks for watching.
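As a cross-check of the balancing shown in the video, here is a small Python sketch (mine, not from the video) that tallies atoms on each side of Ca + 2HNO3 = Ca(NO3)2 + H2 and confirms they match; the per-compound atom counts are written out by hand rather than parsed from formula strings, to keep the example short.

```python
from collections import Counter

# atom tallies for each compound in the balanced equation
Ca = Counter({"Ca": 1})
HNO3 = Counter({"H": 1, "N": 1, "O": 3})
CaNO3_2 = Counter({"Ca": 1, "N": 2, "O": 6})  # Ca(NO3)2
H2 = Counter({"H": 2})

reactants = [(1, Ca), (2, HNO3)]      # Ca + 2 HNO3
products = [(1, CaNO3_2), (1, H2)]    # Ca(NO3)2 + H2

def total(side):
    """Sum coefficient-weighted atom counts over one side of the equation."""
    tally = Counter()
    for coeff, compound in side:
        for atom, count in compound.items():
            tally[atom] += coeff * count
    return tally

print(total(reactants))  # Counter({'O': 6, 'H': 2, 'N': 2, 'Ca': 1})
print(total(products))   # identical tally, so the equation is balanced
assert total(reactants) == total(products)
```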
188816
https://arxiv.org/pdf/1910.00798
arXiv:1910.00798v3 [math.MG] 1 Feb 2020

The right acute angles problem?

Andrey Kupavskii*, Dmitriy Zakharov†

Abstract

The Danzer–Grünbaum acute angles problem asks for the largest size of a set of points in $\mathbb{R}^d$ that determines only acute angles. There has been a lot of progress recently due to the results of the second author and of Gerencsér and Harangi, and now the problem is essentially solved. In this note, we suggest the following variant of the problem, which is one way to "save" the problem. Let $F(\alpha) = \lim_{d\to\infty} f(d,\alpha)^{1/d}$, where $f(d,\alpha)$ is the largest number of points in $\mathbb{R}^d$ with no angle greater than or equal to $\alpha$. Then the question is to find $c := \lim_{\alpha\to\pi/2^-} F(\alpha)$. It is an intriguing question whether $c$ is equal to 2, as one may expect in view of the result of Gerencsér and Harangi. In this paper we prove the lower bound $c \ge \sqrt{2}$. We also solve a related problem of Erdős and Füredi on the "stability" of the acute angles problem and refute another conjecture stated in the same paper.

1 Introduction

A set of points $X \subset \mathbb{R}^d$ is called acute (non-obtuse) if any three points from $X$ form an acute (acute or right, respectively) triangle. In 1962, Danzer and Grünbaum [DG] confirmed a conjecture of Erdős from 1957 that any non-obtuse set of points in $\mathbb{R}^d$ has cardinality at most $2^d$; moreover, the only examples of non-obtuse sets of cardinality $2^d$ are the hypercube and some of its affine images. They then modified the question and asked to determine the maximum size $f(d)$ of an acute set in $\mathbb{R}^d$ for any $d \ge 2$. Danzer and Grünbaum obtained the first bounds on $f(d)$:

$$2d - 1 \le f(d) \le 2^d - 1, \qquad (1)$$

where the upper bound immediately follows from the aforementioned result on non-obtuse sets. They conjectured that the lower bound is tight.

As it turned out recently, the value of $f(d)$ is actually very close to the upper bound in (1). While the only improvement upon the upper bound in (1) made so far is the inequality $f(3) \le 5$ proved in [C], there were numerous improvements for the lower bound. The only values of $f(d)$ that are known at the moment are $f(2) = 3$ and $f(3) = 5$, and the latter is the only known improvement of the upper bound (1), due to Croft [C].

In 1983, Erdős and Füredi [EF] provided a probabilistic construction of an acute set with $\big\lfloor \frac{1}{2}\big(\frac{2}{\sqrt{3}}\big)^d \big\rfloor$ points, thus disproving the conjecture of Danzer and Grünbaum. The underlying idea was to consider a random subset of the vertices of the hypercube $\{0,1\}^d$ (see the next section for details). In the years 1983-2009, the improvements of the lower bound were very moderate: the constant $\frac{1}{2}$ in front of the exponential term $\big(\frac{2}{\sqrt{3}}\big)^d$ was improved in several steps, resulting in the inequality $f(d) \gtrsim 0.942 \cdot \big(\frac{2}{\sqrt{3}}\big)^d$ [B, Bu]. In 2009, Ackerman and Ben-Zwi [AB] improved the Erdős–Füredi bound by a factor of $c\sqrt{d}$ using a certain general result concerning the independence numbers of sparse hypergraphs.

In 2011, Harangi [H] made the first exponential improvement: the constant $\frac{2}{\sqrt{3}} \approx 1.155$ was replaced by $\big(\frac{144}{23}\big)^{0.1} \approx 1.201$. Harangi's idea was to consider random subsets of a set of the form $X_0^n \subset \mathbb{R}^{d_0 n}$, rather than $\{0,1\}^d$, as was done in the proof by Erdős and Füredi. Here, $X_0 \subset \mathbb{R}^{d_0}$ is a low-dimensional acute set, which is typically constructed by hand or with the help of a computer.

*Moscow Institute of Physics and Technology, IAS Princeton; Email: kupavskii@ya.ru. Research supported by the grant of the Russian Government N 075-15-2019-1926.
†Higher School of Economics; Email: s18b1 zakharov@179.ru
For example, if one takes $X_0$ to be an acute triangle on the plane, then one gets the bound $f(d) \gtrsim 1.158^d$, which is slightly better than the Erdős–Füredi bound. Harangi used a 12-point acute subset of $\mathbb{R}^5$ in his proof.

The next round of development was triggered in the spring of 2017, when the first explicit exponential acute sets were constructed by the second author [Z]. The obtained bound on $f(d)$ was also much better than the previously known ones: $f(d) \ge F_{d+1} \approx 1.618^d$, where $F_d$ is the $d$-th Fibonacci number (here $F_0 = F_1 = 1$). The proof used induction and certain slight perturbations of the point set to make the right angles in the arising product-type constructions acute. In the fall of 2017 Gerencsér and Harangi [GH] proved that

$$f(d) \ge 2^{d-1} + 1. \qquad (2)$$

The proof was inspired by constructions of 9-point and 17-point acute sets in $\mathbb{R}^4$ and $\mathbb{R}^5$, respectively, made by a Ukrainian mathematics enthusiast. The idea of Gerencsér and Harangi's bound is to carefully perturb the vertices of the hypercube $\{0,1\}^{d-1}$ using one extra dimension to get rid of all right angles. One extra point can then be added to the construction.

One common feature of all known explicit exponential-sized constructions is that the largest angle among the points is just barely smaller than $\frac{\pi}{2}$, and the constructions break down completely if we require the largest angle to be, say, $\frac{\pi}{2} - 0.001$. On the other hand, as we shall see below, random constructions can usually be modified so that the largest angle is separated from $\frac{\pi}{2}$. This suggests a certain interesting direction for research, but let us first introduce a couple of definitions.

Definition 1. Denote by $f(d,\alpha)$ the size of the largest set of points in $\mathbb{R}^d$ with no three points forming an angle at least $\alpha$. Put

$$F(\alpha) := \limsup_{d\to\infty} f(d,\alpha)^{1/d}. \qquad (3)$$

Thus, for instance, $f(d) = f(d, \frac{\pi}{2})$, and the result of Gerencsér–Harangi now implies that $F(\frac{\pi}{2}) = 2$. In [Kup], the first author showed that $\lim_{\alpha\to\pi/2^+} f(d,\alpha) = 2^d$. Note that $f(d,\alpha)$ is meaningful only for $\alpha \in [\frac{\pi}{3}, \pi]$ since $f(d,\alpha) = 2$ for any $\alpha \le \frac{\pi}{3}$. Some further results about $f(d,\alpha)$ for $\alpha$ close to $\frac{\pi}{3}$ or to $\pi$ can be found in [EF]. Results of Erdős–Füredi [EF, Theorem 3.6] translate to the following:

$$F\big(\tfrac{\pi}{3} + \delta\big) \in [1 + \delta^2,\ 1 + 4\delta]. \qquad (4)$$

In the range $\alpha > \frac{\pi}{2}$ it turns out that $f(d,\alpha)$ grows surprisingly fast. The following result is essentially due to Erdős–Füredi [EF, Theorem 4.3], but their formulation applies only to $\alpha$ close enough to $\pi$ (note that the condition that $n$ is sufficiently large is missing in the statement of [EF, Theorem 4.3]).

Proposition 1. For any $\alpha \in (\frac{\pi}{2}, \pi)$ there are constants $C, c > 1$ such that for all sufficiently large $d$

$$2^{c^d} < f(d,\alpha) < 2^{C^d}. \qquad (5)$$

Note that Proposition 1 refutes Conjecture 2.13 from the very same paper [EF]. Now we can formulate our main question.

Question 1. Is it true that

$$\lim_{\alpha\to\pi/2^-} F(\alpha) = 2? \qquad (6)$$

Equivalently, is it true that for any $\varepsilon > 0$ there is $\delta > 0$ so that for any sufficiently large $d$ there is a set $X \subset \mathbb{R}^d$ of cardinality at least $(2-\varepsilon)^d$ such that any three points from $X$ determine an angle less than $\frac{\pi}{2} - \delta$?

Although the problem is very close to the acute angles problem, the current methods that use explicit constructions fail completely, and the gap between the bounds is still exponential. We prove the following lower bound in this paper.

Theorem 1. We have

$$\lim_{\alpha\to\pi/2^-} F(\alpha) \ge \sqrt{2}. \qquad (7)$$
That is, for every $\varepsilon > 0$ there exists $\delta > 0$ such that for any sufficiently large $d$ there is a set $X \subset \mathbb{R}^d$ of cardinality at least $(\sqrt{2} - \varepsilon)^d$ determining only angles less than $\frac{\pi}{2} - \delta$.

Our proof is a combination of the method of Erdős–Füredi with the recent construction of acute sets by Gerencsér–Harangi. The second result gives a non-trivial upper bound on $F(\alpha)$ for any $\alpha < \frac{\pi}{2}$.

Theorem 2. For $\alpha > 0$ small enough we have $F(\frac{\pi}{2} - \alpha) \le 2 - \alpha^2$.

Theorem 2 confirms a conjecture of Erdős–Füredi [EF, Conjecture 3.5]. The proof is a modification of the proof of the inequality $f(d) \le 2^d$ due to Danzer and Grünbaum. Namely, their proof is based on the observation that if $X$ is an acute set and $P = \operatorname{conv}(X)$ is the convex hull of $X$, then the interiors of the homothets $\frac{P+x}{2}$, $x \in X$, are pairwise disjoint. Considering the volumes, one easily obtains the bound $|X| \le 2^d$. The idea behind the proof of Theorem 2 is to take two disjoint subsets $A, C \subset X$ and consider sets of the form $\lambda \operatorname{conv}(A) + (1-\lambda)c \subset \operatorname{conv}(A \cup C)$, where $c \in C$. One can show that these sets are pairwise disjoint provided (i) all the angles in $X$ are less than $\frac{\pi}{2} - \alpha$ and (ii) $\lambda$ is chosen appropriately. One then obtains the inequality $\lambda^d \operatorname{Vol}(\operatorname{conv} A)\,|C| \le \operatorname{Vol}(\operatorname{conv}(A \cup C))$. Lemma 1 implies that one can choose $A$ and $C$ in such a way that $\operatorname{Vol}(\operatorname{conv} A)$ and $\operatorname{Vol}(\operatorname{conv}(A \cup C))$ are almost the same and $|C|$ is comparable to $|X|$, which completes the proof.

2 The proofs

Sketch of the proof of Proposition 1. To prove the lower bound, we construct a set $\{v_1, \dots, v_m\}$ of $m \ge c^d$ unit vectors in $\mathbb{R}^d$ such that the angle between any two of them lies in $(\frac{\pi}{2} - \epsilon, \frac{\pi}{2} + \epsilon)$, where $2\epsilon = \alpha - \frac{\pi}{2}$. This can be done by taking a random subset of the unit sphere and applying a concentration inequality (see, for instance, [M, Chapter 14]). Now take a sufficiently large number $\lambda$ and consider the set $X = \{v_I = \sum_{t \in I} \lambda^t v_t \mid I \subset [m]\}$. Note that $|X| = 2^{c^d}$. For any two points $v_I, v_J \in X$ we have $v_I - v_J \approx \pm \lambda^t v_t$, where $t$ is the largest element of $I \Delta J$. So the angle between $v_I - v_J$ and $v_I - v_K$ is approximately equal to the angle between some vectors $\pm v_i$ and $\pm v_j$, and is therefore at most $\alpha$.

To prove the upper bound, we construct a set $\{v_1, \dots, v_m\}$ of $m \le C^d$ vectors such that any vector determines an angle less than $\frac{\pi-\alpha}{2}$ with one of them. This can be done by a greedy algorithm or deduced from known results for the sphere packing problem. Take a set $X$ of more than $2^m$ points. For $x, y \in X$, color a pair $(x, y)$, $x \ne y$, in color $i$ if the angle between $v_i$ and $x - y$ is at most $\frac{\pi-\alpha}{2}$. In what follows, we show that, since $|X| > 2^m$, there exists a triple $x, y, z$ such that $(x, y)$ and $(y, z)$ received the same color (i.e., there is a monochromatic oriented 2-path). But then the angle between $y - x$ and $y - z$ is at least $\alpha$. We show that such a triple exists by induction on $m$. The statement is clear for $m = 1$ and $|X| = 3$. Next, for $m$-colorings, take any color, say red, and consider all edges of this color. If there is no red oriented 2-path, then each vertex either has only incoming or only outgoing red edges, and so red edges span a bipartite graph. (We are free to assign vertices with no incident red edge to either of the two parts.) Take the bigger part of this bipartite graph. It has size at least $\lceil (2^m + 1)/2 \rceil = 2^{m-1} + 1$ and is colored with $m - 1$ colors. Thus it contains a monochromatic oriented 2-path.

Proof of Theorem 1. Fix an arbitrary $\varepsilon > 0$. Take a sufficiently large $d_0$ and an acute set $X_0 \subset \mathbb{R}^{d_0}$ of size $2^{d_0 - 1} + 1$ (which exists by (2)).
Let $R > 0$ be the diameter of $X_0$ and denote by $s$ the smallest scalar product $\langle x-y, x-z \rangle$ over all triples $x, y, z \in X_0$ such that $x \ne y, z$. By the definition of an acute set, we have $s > 0$.

W.l.o.g., assume that $d_0$ divides $d$. Let $m = 2^{\frac{1-\varepsilon}{2} n d_0}$, where $n = d/d_0$. Choose $2m$ uniformly random points $p_1, \dots, p_{2m} \in X_0^n \subset \mathbb{R}^{d_0 n}$, and set $p_i = (p_{i1}, \dots, p_{in})$. Let us estimate the expectation of the number of triples $(i, j, k)$ such that $\langle p_i - p_j, p_i - p_k \rangle \le \frac{\varepsilon}{2} n s$. If for some $i, j, k$ we have $\langle p_i - p_j, p_i - p_k \rangle \le \frac{\varepsilon}{2} n s$, then there are at least $(1 - \frac{\varepsilon}{2})n$ coordinates $t \in \{1, \dots, n\}$ for which $p_{it} = p_{jt}$ or $p_{it} = p_{kt}$. The probability of the latter event is at most $\binom{n}{\frac{\varepsilon}{2} n} \big(\frac{2}{|X_0|}\big)^{(1 - \frac{\varepsilon}{2})n} \le 2^{n - (1 - \frac{\varepsilon}{2})(d_0 - 2)n}$. So the expectation of the number of such triples is at most

$$(2m)^3\, 2^{n - (1 - \frac{\varepsilon}{2})(d_0 - 2)n} \le 8m\, 2^{(1-\varepsilon)n d_0}\, 2^{-(1 - \frac{\varepsilon}{2})n d_0 + 3n} \ll m. \qquad (8)$$

Thus there are points $p_1, \dots, p_{2m}$ with at most $m$ "bad" triples. Remove one point from each of these triples to obtain a set $X \subset X_0^n \subset \mathbb{R}^{n d_0}$ of cardinality at least $m = \sqrt{2}^{\,(1-\varepsilon)n d_0}$ such that for any two points $x, y \in X$ we have $|x - y|^2 \le R^2 n$, and for any three points $x, y, z \in X$ we have $\langle x-y, x-z \rangle \ge \frac{\varepsilon}{2} n s$. This means that the angle $\alpha$ between the vectors $x-y$ and $x-z$ satisfies $\cos\alpha \ge \frac{\varepsilon}{2} s / R^2$ and thus depends on $\varepsilon$ only.

In the proof of Theorem 2, we shall need the following lemma.

Lemma 1. Suppose $X \subset \mathbb{R}^d$, $|X| = N \ge d + 1$, and the convex hull $\operatorname{conv}(X)$ has non-zero volume. Then for any $c \in \big[\frac{12 d \log_2 N}{N}, 1\big]$ there are sets $A \subset B \subset X$ such that

1. $|B \setminus A| \ge \frac{c}{3} \cdot \frac{N}{d \log_2 N}$;
2. $0 \ne \operatorname{Vol}(\operatorname{conv}(B)) \le (1 + c)\operatorname{Vol}(\operatorname{conv}(A))$.

Proof. By Carathéodory's theorem, every point of $\operatorname{conv}(X)$ lies in the convex hull of some $d + 1$ points of $X$, so by the pigeonhole principle there is a set $X_0 \subset X$ of size $d + 1$ such that

$$\operatorname{Vol}(\operatorname{conv}(X_0)) \ge \binom{N}{d+1}^{-1} \operatorname{Vol}(\operatorname{conv}(X)) \ge N^{-d-1} \operatorname{Vol}(\operatorname{conv}(X)).$$

Take any chain $X_0 \subset X_1 \subset \dots \subset X_m = X$ such that $|X_{i+1} \setminus X_i| \in \big[\frac{c}{3} \cdot \frac{N}{d \log_2 N},\ \frac{c}{2} \cdot \frac{N}{d \log_2 N}\big]$. We have $m \ge \frac{2 d \log_2 N}{c}$, so if we had $\operatorname{Vol}(\operatorname{conv}(X_{i+1})) > (1 + c)\operatorname{Vol}(\operatorname{conv}(X_i))$ for all $i$, then

$$\operatorname{Vol}(\operatorname{conv}(X)) > (1 + c)^m \operatorname{Vol}(\operatorname{conv}(X_0)) \ge 2^{2 d \log_2 N} \operatorname{Vol}(\operatorname{conv}(X_0)) \ge \operatorname{Vol}(\operatorname{conv}(X)),$$

a contradiction.

Proof of Theorem 2. Take a set $X \subset \mathbb{R}^d$ which determines only angles at most $\frac{\pi}{2} - \alpha$ for a sufficiently small $\alpha > 0$. Put $\varepsilon = \sin\alpha$. It is easy to see that for any three different points $x, y, z \in X$

$$\langle y-x, z-x \rangle \ge \varepsilon \|y-x\| \|z-x\| \ge 1.5\varepsilon^2 \|z-x\|^2, \qquad (9)$$

where the last inequality follows from the fact that $\frac{\|y-x\|}{\|z-x\|} = \frac{\sin\angle xzy}{\sin\angle zyx} \ge \sin\angle xzy \ge \sin 2\alpha \ge 1.5\varepsilon$ for sufficiently small $\alpha$. Doing the same calculation with both $z - x$ and $x - z$ as the second vector in the scalar product in (9), we get that for any three distinct $x, y, z$ we have

$$1.5\varepsilon^2 \|z-x\|^2 < \langle y-x, z-x \rangle < (1 - 1.5\varepsilon^2) \|z-x\|^2. \qquad (10)$$

Applying Lemma 1 with $c = 1$, we get sets $A \subset B$ such that $0 \ne \operatorname{Vol}(\operatorname{conv} B) \le 2 \operatorname{Vol}(\operatorname{conv} A)$ and $|B \setminus A| \ge \frac{|X|}{4d^2}$. Take $\lambda = \frac{1}{2} (1 - 1.5\varepsilon^2)^{-1}$; from (10) we see that for any distinct $x, z \in B \setminus A$ we have $\big((1-\lambda)x + \operatorname{conv}(\lambda A)\big) \cap \big((1-\lambda)z + \operatorname{conv}(\lambda A)\big) = \emptyset$. Indeed, for any point $y$ from the first set we have $\langle y-x, z-x \rangle < \lambda (1 - 1.5\varepsilon^2) \|z-x\|^2 = \frac{1}{2} \|z-x\|^2$, while for any $y'$ from the second set we have $\langle y'-x, z-x \rangle > (1-\lambda) \|z-x\|^2 + \lambda \cdot 1.5\varepsilon^2 \|z-x\|^2 = \frac{1}{2} \|z-x\|^2$. Moreover, $(1-\lambda)x + \operatorname{conv}(\lambda A) \subset \operatorname{conv} B$ for any $x \in B$, so

$$|B \setminus A|\, \lambda^d\, \operatorname{Vol}(\operatorname{conv} A) \le \operatorname{Vol}(\operatorname{conv} B) \le 2 \operatorname{Vol}(\operatorname{conv} A), \qquad (11)$$

thus

$$|X| \le 4d^2 |B \setminus A| \le 8d^2 \lambda^{-d} = 8d^2\, 2^d (1 - 1.5\varepsilon^2)^d \le (2 - \alpha^2)^d, \qquad (12)$$

provided that $d$ is sufficiently large and $\alpha > 0$ is sufficiently small. (Here we used that $\lim_{\alpha\to0^+} \frac{\sin\alpha}{\alpha} = 1$.)
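The probabilistic deletion idea used in the proof of Theorem 1 (and, originally, in the Erdős–Füredi construction over the hypercube) is easy to experiment with numerically. Below is a small Python sketch, mine rather than the authors', that samples random vertices of $\{0,1\}^d$, detects triples whose apex angle is right or obtuse, and deletes the apex of each bad triple; the sizes d = 20 and m = 40 are arbitrary illustrative choices, not parameters from the paper.

```python
import itertools
import random

def bad_apex(x, y, z):
    # the angle at x is right or obtuse iff <y - x, z - x> <= 0;
    # coordinates are 0/1 integers, so the comparison is exact
    return sum((yi - xi) * (zi - xi) for xi, yi, zi in zip(x, y, z)) <= 0

random.seed(0)
d, m = 20, 40  # arbitrary illustrative sizes
pts = list({tuple(random.randint(0, 1) for _ in range(d)) for _ in range(m)})

# deletion step: drop the apex of every triple with a non-acute angle
alive = set(pts)
for x, y, z in itertools.permutations(pts, 3):
    if x in alive and y in alive and z in alive and bad_apex(x, y, z):
        alive.discard(x)

# every remaining triple now forms an acute triangle
print(len(pts), "sampled ->", len(alive), "points in an acute set")
```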
Acknowledgements: We thank the reviewers for carefully reading the manuscript and suggesting numerous changes that helped to improve the exposition.

References

[AB] E. Ackerman and O. Ben-Zwi, On sets of points that determine only acute angles, European Journal of Combinatorics 30 (2009), N4, 908-910.
[B] D. Bevan, Sets of points determining only acute angles and some related colouring problems, The Electronic Journal of Combinatorics 13 (2006), N1, paper 12.
[Bu] L. V. Buchok, Two new approaches to obtaining estimates in the Danzer–Grünbaum problem, Math. Notes 87 (2010), N4, 489-496.
[C] H. T. Croft, On 6-point configurations in 3-space, Journal of the London Mathematical Society 1 (1961), N1, 289-306.
[DG] L. Danzer and B. Grünbaum, Über zwei Probleme bezüglich konvexer Körper von P. Erdős und von V. L. Klee, Mathematische Zeitschrift 79 (1962), N1, 95-99.
[EF] P. Erdős and Z. Füredi, The greatest angle among n points in the d-dimensional Euclidean space, Annals of Discrete Mathematics 17 (1983), 275-283.
[GH] B. Gerencsér and V. Harangi, Acute sets of exponentially optimal size, Discrete & Computational Geometry 62 (2019), N4, 775-780.
[H] V. Harangi, Acute sets in Euclidean spaces, SIAM Journal on Discrete Mathematics 25 (2011), N3, 1212-1229.
[M] J. Matoušek, Lectures on Discrete Geometry, Vol. 212, Springer, New York, 2002.
[Kup] A. Kupavskii, Number of double-normal pairs in space, Discrete and Computational Geometry 56 (2016), N3, 711-726.
[Z] D. Zakharov, Acute sets, Discrete & Computational Geometry 61 (2019), N1, 212-217.
188817
https://www.98thpercentile.com/blog/inches-to-feet-conversion-guide
Inches to Feet: Conversion Guide
ElevatEd Math, November 8, 2024

We regularly come across different units of measurement in our daily lives, especially when referring to lengths and distances. Inches and feet are two of the Imperial system's most often used units. Both feet and inches are important in industries like engineering, interior design, and construction, because feet offer a wider view for longer distances, while inches are typically used for smaller, more precise measurements. Knowing how to convert between inches and feet is an essential skill for anyone who wants to work with measurements efficiently. In this guide we'll examine the relationship between these two units, offer straightforward conversion techniques, and highlight practical uses, so that you have all the resources you need for precise measurements in your work.

Understanding the Basics: Inches and Feet
In the Imperial system, which is widely used in the United States and a few other nations, an inch and a foot are both measures of length. The first step to understanding conversions is to grasp their connection.
Inch: The inch is the more compact unit of measurement, frequently used for precise measurements, such as figuring out the width of a board or the size of tiny items like screws or nails. An inch is equal to one-twelfth of a foot.
Foot: A foot is a longer unit of measurement, commonly used to measure the length of rooms or people's heights. A foot is twelve inches long.
Key fact: 1 foot = 12 inches

Conversion Formula
To convert inches to feet, simply divide the inch count by 12. Conversely, to convert feet to inches, multiply by 12. (A short code sketch of these conversions follows the FAQ below.)
Inches to feet: feet = inches / 12
Feet to inches: inches = feet × 12

Tips for Quick Conversions
Having these conversion strategies in your toolbox makes the process faster and easier. Here are a few brief pointers:
Memorize basic conversions: Knowing that 1 foot = 12 inches is vital. Learn the basic multiples of twelve, such as 24 inches (2 feet), 36 inches (3 feet), and so on.
Use a conversion chart: A conversion chart can be a useful tool if you work with inches and feet regularly. A chart provides quick look-up data and helps with bigger conversions.
Use a calculator: Using a calculator is the easiest way to ensure accuracy when converting large numbers or when precise calculations are required.
Mental arithmetic for easy conversions: Practice mentally dividing and multiplying smaller amounts by 12. With time this becomes second nature, and conversions become almost instantaneous.

Common Applications of Inches to Feet Conversions
There are several real-world scenarios where converting between inches and feet is helpful:
Construction: Contractors and builders frequently work with both units. For instance, plans may be drawn up in feet, while supplies like timber are measured in inches.
Interior Design: Conversions provide precise proportions to match your space when measuring flooring, furniture, or drapes.
Travel and Aviation: Inches are frequently used to state size limits for luggage and airline seats, while height restrictions for certain ride attractions or buildings may be given in feet.

Converting with Online Tools and Apps
In the digital age, many online calculators and apps can instantly convert inches to feet and vice versa. The built-in calculator on most smartphones also makes it simple to divide and multiply by 12. These are handy tools, particularly for larger conversions or while you're on the road.

The ability to convert inches to feet is useful, particularly in the travel, design, and construction industries. If you know the basic conversion relationship between these units, you can do the computations quickly and precisely. This skill can come in handy whether you're working on bigger projects or taking measurements around the house. With some practice and these conversion techniques, you'll become an expert at switching between these two crucial units of measurement.

FAQs (Frequently Asked Questions)
1. What is the process for converting inches to feet and inches?
Ans. Divide by 12: the quotient is the number of feet and the remainder is the leftover inches. For example, 78 inches ÷ 12 gives 6 with a remainder of 6, so 78 inches equals 6 feet 6 inches.
2. Are there any common applications for converting inches to feet?
Ans. Yes, this conversion is commonly used in construction, interior design, and travel to ensure accurate measurements for building materials, furniture, and space requirements.
3. How can I quickly convert between inches and feet without calculations?
Ans. You can use a conversion chart or an online calculator for quick reference. Many smartphones also have built-in calculators that make conversions easy.
4. What is the importance of mastering inches-to-feet conversions?
Ans. Understanding these conversions is crucial for accuracy in measurements, which can save time and prevent costly mistakes in projects like renovations, landscaping, and purchasing furniture.
5. Are there any tools or resources that can help with conversions?
Ans. Yes! There are numerous online tools, mobile apps, and even conversion charts that can assist with converting inches to feet and vice versa, making the process quick and efficient.
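As promised above, here is a minimal Python sketch of the feet-and-inches conversion described in FAQ 1; the function name is my own choice for illustration, not something from the article.

```python
def inches_to_feet_and_inches(total_inches):
    """Split a length in inches into whole feet plus leftover inches."""
    feet, remainder = divmod(total_inches, 12)  # 12 inches per foot
    return feet, remainder

print(inches_to_feet_and_inches(78))  # (6, 6) -> 6 feet 6 inches
print(78 / 12)                        # 6.5 -> the same length as decimal feet
```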
188818
https://m.fx361.cc/news/2025/0430/27051433.html
Exploring Strategies for Cultivating Model Awareness in Primary School Mathematics
2025-04-30, Zhou Maohua (周茂华), Hubei Education: Teaching and Learning (湖北教育·教育教学), 2025, Issue 4
Keywords: situations, mathematical models

"Model awareness," as an important component of core mathematical literacy at the primary level, emphasizes abstracting mathematical problems from concrete situations, constructing mathematical models, and applying them. This paper focuses on model awareness, analyzes how the curriculum standards' requirements have evolved, and explores strategies for cultivating it.

1. The evolution of curriculum standards: from implicit infiltration to explicit construction

Although the Mathematics Curriculum Standards for Compulsory Education (2011 edition) mentioned model thinking, it remained largely implicit within the teaching of problem solving, with the emphasis on training students' solution techniques through worked examples; as a result, teachers paid insufficient attention to cultivating mathematical thinking. The 2022 edition explicitly lists "model awareness" as an element of core mathematical literacy for the primary stage, requiring students to form structured thinking through "perceiving situations - abstracting and modeling - interpreting and applying." Based on the curriculum standards, the author analyzes the emphases of model-awareness cultivation at each learning stage.

In the first stage, students mainly express quantitative relationships in simple situations through hands-on manipulation and drawing (for example, using circle diagrams to represent addition and subtraction). When designing lessons, teachers can set up hands-on activities such as "sharing objects" and "comparing quantities," letting students naturally construct initial models of addition and subtraction while splitting and combining physical objects. For example, a "handing out candy" situation can help students understand "total = part + part," and laying out sticks to show "8 - 3 = 5" gives the abstract symbolic expression a concrete counterpart, tightly linking mathematical symbols with physical manipulation. Note that at this stage teachers should avoid introducing terminology too early; instead, storytelling and drawing help students build an initial sense of mathematical models and lay the groundwork for more abstract modeling later.

In the second stage, students can usually identify common mathematical models (such as "speed × time = distance") and explain what a model means. When designing lessons, teachers can select typical problems, such as the distributive law of multiplication and "normalization" problems, to build a structured body of teaching content. For example, teachers can guide students to compare the calculations involved in different scenarios, such as buying stationery and computing travel time, and then distill the verbal formula "unit quantity × number of units = total quantity"; simple mathematical tools such as line-segment diagrams and tables can strengthen students' ability to abstract concrete problems into transferable mathematical models.

In the third stage, students can construct models such as equations and functions on their own and use them to solve relatively complex mathematical problems. When designing lessons, teachers can create authentic problem situations, such as resource allocation or the analysis of relationships between variables, letting students experience the necessity and convenience of symbolic representation while solving real problems. For example, for the problem "given the price difference between two goods and the relationship between their quantities, find the unit price," teachers can guide students to denote the unknowns with letters and translate everyday language into symbols, so that students go through the "decontextualization" process of raising their thinking, understand and appreciate the role and appeal of mathematical models, and form the closed loop "real problem - mathematical expression - model solving - reality check."

Comparing the two documents, the author finds that the new curriculum standards place more emphasis on being "driven by authentic problems," guiding students to reach decisions through data collection and model construction, and to fully appreciate the value of model thinking in the course of learning and applying.

2. Literacy cultivation strategies: a "four-link" path to an efficient classroom

Based on the new standards' staged requirements for cultivating model awareness, and taking into account the cognitive characteristics of students at different stages, the author has put a "situation - exploration - transfer - reflection" four-link teaching model into classroom practice. Taking a lesson on discount problems as an example, the author starts from an authentic problem, drives students through the complete mathematical modeling process with layered tasks, and embeds multi-dimensional evaluation and differentiated guidance in the practice links, reflecting the shift required by the new standards from knowledge-centered to literacy-oriented teaching. The lesson proceeds as follows.

First, the author presents an authentic "bookstore anniversary sale" situation with two promotions, "20% off all books" (八折, i.e., pay 80% of the list price) and "20 yuan off every 100 yuan spent," and poses the driving question: "For a book with a list price of 120 yuan, which way of buying is the better deal?" Students reason from experience but disagree, so the author prompts: "How can we use mathematics to decide precisely? Let's analyze a concrete example and compare the price difference." Working in groups, students first compute the 20%-off price, 96 yuan (120 × 80% = 96), and distill the formula "current price = list price × discount rate"; they then compute the price under the spend-and-save scheme, 100 yuan (120 - 20 = 100), and conclude by comparison that "different thresholds mean different strengths of discount." The author summarizes: discount rates and spend-and-save schemes are, in essence, both "models of the correspondence between the list price and the proportion saved." Next, the author guides transfer and application, deepening students' grasp of the model through variant problems: under the author's guidance, students compute the effective discount rates of promotions such as "second item half price" and "buy three, get one free." Through calculation and analysis, students find that with "second item half price," two cups of milk tea listed at 20 yuan each cost 30 yuan against a combined list price of 40 yuan, a rate of 30 ÷ 40 = 75%; with "buy three, get one free," four items with a combined list price of 4a yuan cost only 3a yuan, a saving of a yuan, so the rate is 3a ÷ 4a = 75%, i.e., 25% off in both cases. Striking while the iron is hot, the author guides reverse application, asking students to recover the list price from a known current price and discount rate. A student offers an example: an item selling for 72 yuan at 10% off has list price 72 ÷ 90% = 80 yuan. Finally the author asks, "What is the most critical step in solving discount problems?" and "How do we avoid confusing discount rates with spend-and-save rules?", guiding students to reflect on the learning process and distill their experience. Drawing on the lesson, students conclude: the most critical step in solving discount problems is to clarify the ratio between the amount actually paid and the total list price; the core is to quantify the strength of the offer through mathematical modeling, and step-by-step calculation and comparison prevent the rules from being confused.

Through this lesson, students experienced the complete knowledge-construction process of "real-life problem - mathematical modeling - strategy optimization." They not only mastered the discount-problem model but, through making a genuine decision, came to appreciate the applied value of mathematical models, strengthened their awareness of rational consumption, and turned knowledge learning into literacy growth.

(Author's affiliation: Dajiang Town Central Primary School, Taishan, Guangdong Province)
Text editor: Zhang Min
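To make the lesson's price comparison concrete, here is a minimal Python sketch (my illustration, not part of the article) of the two promotions applied to the 120-yuan book, plus the effective-discount-rate calculation from the transfer task; the function names are my own.

```python
def percent_off(list_price, rate):
    """'Pay <rate> of list price' promotion, e.g. rate=0.80 for 20% off."""
    return list_price * rate

def spend_and_save(list_price, threshold, saving):
    """'Save <saving> yuan for every <threshold> yuan spent' promotion."""
    return list_price - (list_price // threshold) * saving

book = 120
print(percent_off(book, 0.80))        # 96.0 yuan
print(spend_and_save(book, 100, 20))  # 100 yuan -> the 20%-off deal wins here

# effective discount rate of "second item half price" on 20-yuan milk tea
paid, original = 20 + 10, 20 + 20
print(paid / original)  # 0.75, i.e. 25% off, matching the lesson's result
```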
188819
https://groupprops.subwiki.org/wiki/Conjugacy_class_size_formula_in_symmetric_group
Conjugacy class size formula in symmetric group
From Groupprops

Statement

Suppose $n$ is a natural number and $\lambda$ is an unordered integer partition of $n$ such that $\lambda$ has $m_k$ parts of size $k$ for each $k$. In other words, there are $m_1$ 1s, $m_2$ 2s, $m_3$ 3s, and so on. Let $C_\lambda$ be the conjugacy class in the symmetric group of degree $n$ comprising the elements whose cycle type is $\lambda$, i.e., those elements whose cycle decomposition has $m_k$ cycles of length $k$ for each $k$. Then:

$$|C_\lambda| = \frac{n!}{\prod_k k^{m_k}\, m_k!}$$

Note that those $k$ where $m_k = 0$ contribute a 1 in the denominator and can be ignored from the product, while for those $k$ where $m_k = 1$, the $m_k!$ term can be omitted. Equivalently, if $Z$ is the centralizer of any element of $C_\lambda$, then:

$$|Z| = \prod_k k^{m_k}\, m_k!$$

These are equivalent because the size of a conjugacy class equals the index of the centralizer, which follows from the identification of the conjugacy class with the left coset space of the centralizer via the action of the group on itself as automorphisms by conjugation.

Related facts
- Cycle type determines conjugacy class
- Splitting criterion for conjugacy classes in the alternating group

Examples

Illustrative examples

For instance, consider $n = 23$ with the partition $3 + 3 + 3 + 3 + 2 + 2 + 2 + 1 + 1 + 1 + 1 + 1$. There are four 3s, three 2s, and five 1s. An example element with this cycle type is given by the cycle decomposition $(1\,2\,3)(4\,5\,6)(7\,8\,9)(10\,11\,12)(13\,14)(15\,16)(17\,18)$, with 19 through 23 as fixed points. The size of the conjugacy class corresponding to this partition is:

$$\frac{23!}{(3^4 \cdot 4!)(2^3 \cdot 3!)(1^5 \cdot 5!)}$$

Here's another example: the partition $5 + 4 + 4$ of $13$. There is one 5 and two 4s, and we get:

$$\frac{13!}{(5^1 \cdot 1!)(4^2 \cdot 2!)}$$

When a particular $k$ has $m_k = 1$ (i.e., it occurs only once in the partition), then the corresponding term in the denominator is just $k$, so the above can be written more briefly:

$$\frac{13!}{5 \cdot 4^2 \cdot 2!}$$

Comprehensive treatment of small degrees

In the right column links in the table below, you can see tabulated information on the sizes of conjugacy classes, as well as how the formula is applied to the cycle sizes to compute each specific size. The cases are embedded below.

| Degree | Symmetric group | List of conjugacy class sizes | Element structure page | Section on conjugacy class structure interpreted as symmetric group |
| --- | --- | --- | --- | --- |
| 3 | symmetric group:S3 | 1,2,3 | element structure of symmetric group:S3 | element structure of symmetric group:S3#Interpretation as symmetric group |
| 4 | symmetric group:S4 | 1,3,6,6,8 | element structure of symmetric group:S4 | element structure of symmetric group:S4#Interpretation as symmetric group |
| 5 | symmetric group:S5 | 1,10,15,20,20,24,30 | element structure of symmetric group:S5 | element structure of symmetric group:S5#Interpretation as symmetric group |
| 6 | symmetric group:S6 | 1,15,15,40,40,45,90,90,120,120,144 | element structure of symmetric group:S6 | element structure of symmetric group:S6#Interpretation as symmetric group |
| 7 | symmetric group:S7 | 1,21,70,105,105,210,210,280,420,420,504,504,630,720,840 | element structure of symmetric group:S7 | element structure of symmetric group:S7#Interpretation as symmetric group |
| 8 | symmetric group:S8 | 1, 28, 105, 112, 210, 420, 420, 1120, 1120, 1120, 1260, 1260, 1344, 1680, 2520, 2688, 3360, 3360, 3360, 4032, 5040, 5760 | element structure of symmetric group:S8 | element structure of symmetric group:S8#Interpretation as symmetric group |

Degree 3:

| Partition | Partition in grouped form | Verbal description of cycle type | Elements with the cycle type in cycle decomposition notation | Elements with the cycle type in one-line notation | Size of conjugacy class | Formula for size | Even or odd? If even, splits? If splits, real in alternating group? | Element order |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 + 1 + 1 | 1 (3 times) | three fixed points | $()$ -- the identity element | 123 | 1 | $\frac{3!}{1^3 \cdot 3!}$ | even; no | 1 |
| 2 + 1 | 2 (1 time), 1 (1 time) | transposition in symmetric group:S3: one 2-cycle, one fixed point | $(1,2)$, $(1,3)$, $(2,3)$ | 213, 321, 132 | 3 | $\frac{3!}{2 \cdot 1}$ | odd | 2 |
| 3 | 3 (1 time) | 3-cycle in symmetric group:S3: one 3-cycle | $(1,2,3)$, $(1,3,2)$ | 231, 312 | 2 | $\frac{3!}{3}$ | even; yes; no | 3 |
| Total (3 rows -- 3 being the number of unordered integer partitions of 3) | -- | -- | -- | -- | 6 (equals 3!, the size of the symmetric group) | -- | odd: 3; even, no: 1; even, yes, no: 2 | order 1: 1, order 2: 3, order 3: 2 |

Degree 4:

| Partition | Partition in grouped form | Verbal description of cycle type | Elements with the cycle type | Size of conjugacy class | Formula for size | Even or odd? If even, splits? If splits, real in alternating group? | Element order |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 + 1 + 1 + 1 | 1 (4 times) | four cycles of size one each, i.e., four fixed points | $()$ -- the identity element | 1 | $\frac{4!}{1^4 \cdot 4!}$ | even; no | 1 |
| 2 + 1 + 1 | 2 (1 time), 1 (2 times) | one transposition (cycle of size two), two fixed points | $(1,2)$, $(1,3)$, $(1,4)$, $(2,3)$, $(2,4)$, $(3,4)$ | 6 | $\frac{4!}{2 \cdot 1^2 \cdot 2!}$, also $\binom{4}{2}$ | odd | 2 |
| 2 + 2 | 2 (2 times) | double transposition: two cycles of size two | $(1,2)(3,4)$, $(1,3)(2,4)$, $(1,4)(2,3)$ | 3 | $\frac{4!}{2^2 \cdot 2!}$ | even; no | 2 |
| 3 + 1 | 3 (1 time), 1 (1 time) | one 3-cycle, one fixed point | $(1,2,3)$, $(1,3,2)$, $(1,2,4)$, $(1,4,2)$, $(1,3,4)$, $(1,4,3)$, $(2,3,4)$, $(2,4,3)$ | 8 | $\frac{4!}{3 \cdot 1}$ | even; yes; no | 3 |
| 4 | 4 (1 time) | one 4-cycle, no fixed points | $(1,2,3,4)$, $(1,2,4,3)$, $(1,3,2,4)$, $(1,3,4,2)$, $(1,4,2,3)$, $(1,4,3,2)$ | 6 | $\frac{4!}{4}$ | odd | 4 |
| Total (5 rows, 5 being the number of unordered integer partitions of 4) | -- | -- | -- | 24 (equals 4!, the order of the whole group) | -- | odd: 12 (2 classes); even, no: 4 (2 classes); even, yes, no: 8 (1 class) | order 1: 1 (1 class), order 2: 9 (2 classes), order 3: 8 (1 class), order 4: 6 (1 class) |

Degree 5:

| Partition | Partition in grouped form | Verbal description of cycle type | Representative element with the cycle type | Size of conjugacy class | Formula calculating size | Even or odd? If even, splits? If splits, real in alternating group? | Element order |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 + 1 + 1 + 1 + 1 | 1 (5 times) | five fixed points | $()$ -- the identity element | 1 | $\frac{5!}{1^5 \cdot 5!}$ | even; no | 1 |
| 2 + 1 + 1 + 1 | 2 (1 time), 1 (3 times) | transposition: one 2-cycle, three fixed points | $(1,2)$ | 10 | $\frac{5!}{2 \cdot 1^3 \cdot 3!}$, also $\binom{5}{2}$ | odd | 2 |
| 3 + 1 + 1 | 3 (1 time), 1 (2 times) | one 3-cycle, two fixed points | $(1,2,3)$ | 20 | $\frac{5!}{3 \cdot 1^2 \cdot 2!}$ | even; no | 3 |
| 2 + 2 + 1 | 2 (2 times), 1 (1 time) | double transposition: two 2-cycles, one fixed point | $(1,2)(3,4)$ | 15 | $\frac{5!}{2^2 \cdot 2! \cdot 1}$ | even; no | 2 |
| 4 + 1 | 4 (1 time), 1 (1 time) | one 4-cycle, one fixed point | $(1,2,3,4)$ | 30 | $\frac{5!}{4 \cdot 1}$ | odd | 4 |
| 3 + 2 | 3 (1 time), 2 (1 time) | one 3-cycle, one 2-cycle | $(1,2,3)(4,5)$ | 20 | $\frac{5!}{3 \cdot 2}$ | odd | 6 |
| 5 | 5 (1 time) | one 5-cycle | $(1,2,3,4,5)$ | 24 | $\frac{5!}{5}$ | even; yes; yes | 5 |
| Total (7 rows, 7 being the number of unordered integer partitions of 5) | -- | -- | -- | 120 (equals the order of the group) | -- | odd: 60 (3 classes); even, no: 36 (3 classes); even, yes, yes: 24 (1 class) | order 1: 1 (1 class), order 2: 25 (2 classes), order 3: 20 (1 class), order 4: 30 (1 class), order 5: 24 (1 class), order 6: 20 (1 class) |

In each row, the element order is the least common multiple of the cycle lengths in the partition.
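The formula is easy to check mechanically. Here is a small Python sketch (my own, not from the wiki) that computes $n!/\prod_k k^{m_k} m_k!$ for each partition of 5 and confirms that the class sizes sum to $5! = 120$, matching the degree-5 table above.

```python
from collections import Counter
from math import factorial, prod

def class_size(n, partition):
    """Size of the S_n conjugacy class with the given cycle type."""
    mult = Counter(partition)  # m_k = multiplicity of each cycle length k
    return factorial(n) // prod(k**m * factorial(m) for k, m in mult.items())

partitions_of_5 = [
    (1, 1, 1, 1, 1), (2, 1, 1, 1), (3, 1, 1), (2, 2, 1),
    (4, 1), (3, 2), (5,),
]
sizes = [class_size(5, p) for p in partitions_of_5]
print(sizes)       # [1, 10, 20, 15, 30, 20, 24]
print(sum(sizes))  # 120, i.e. 5! -- the classes partition the group
```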
188820
https://books.google.com/books/about/Water_Supply_Engineering.html?id=8a1DDAAAQBAJ
Water Supply Engineering
Verma Subhash / Kanwar Varinder & John Siby
Vikas Publishing House, 2015. Technology & Engineering. 266 pages.

This book completely covers a one-semester course on potable water supply systems in a single, compact volume for undergraduate students. It covers all the three main topics: sources of water supply, water treatment and water distribution. Using the latest tools and methods, it conceptualizes and formulates the resource allocation problems, and deals appropriately with the complexity of constraints in the demand and available supplies of water. The book integrates the concepts of chemistry, biology and hydraulics as applicable to water supply engineering. It presents the basic and applied principles and most recent practices and technologies. Apart from the students of water supply engineering, practising engineers, professionals and researchers will benefit from the book.

IMPORTANT FEATURES
• Exhaustive coverage of three main topics, viz., sources of water supply, water treatment, and water distribution
• Concepts and design practices illustrated with the help of solved examples
• All related topics discussed in the context of principles of sustainability, affordability, effectiveness, efficiency, and appropriateness
• Step-wise solutions to problems, with stress on unit cancellation in calculations
• Updated data from the Bureau of Indian Standards
• More than 70 solved examples, 70 true/false questions and 325 multiple choice questions

Table of Contents
Chapter 01 (p. 1), Chapter 02 (p. 7), Chapter 03 (p. 23), Chapter 04 (p. 38), Chapter 05 (p. 56), Chapter 06 (p. 76), Chapter 07 (p. 92), Chapter 08 (p. 111), Chapter 09 (p. 133), Chapter 10 (p. 160), Chapter 11 (p. 173), Chapter 12 (p. 182), Chapter 13 (p. 198), Chapter 14 (p. 216), Chapter 15 (p. 237), Appendices (p. 249), Index (p. 262).

Bibliographic information
Title: Water Supply Engineering
Authors: Verma Subhash / Kanwar Varinder & John Siby
Publisher: Vikas Publishing House, 2015
ISBN: 9325984253, 9789325984257
Length: 266 pages
Subjects: Technology & Engineering › Civil › General
188821
http://aleph0.clarku.edu/~djoyce/elements/bookIII/propIII1.html
Euclid's Elements, Book III, Proposition 1

Proposition 1. To find the center of a given circle.

Let ABC be the given circle. It is required to find the center of the circle ABC.

Draw a straight line AB through it at random, and bisect it at the point D [I.10]. Draw DC from D at right angles to AB [I.11], and draw it through to E. Bisect CE at F [I.10]. I say that F is the center of the circle ABC.

For suppose it is not, but, if possible, let G be the center. Join GA, GD, and GB. Then, since AD equals DB, and DG is common, the two sides AD and DG equal the two sides BD and DG respectively. And the base GA equals the base GB, for they are radii [I.Def.15]; therefore the angle ADG equals the angle GDB [I.8]. But, when a straight line standing on a straight line makes the adjacent angles equal to one another, each of the equal angles is right [I.Def.10]; therefore the angle GDB is right. But the angle FDB is also right, therefore the angle FDB equals the angle GDB, the greater equals the less, which is impossible. Therefore G is not the center of the circle ABC.

Similarly we can prove that neither is any other point except F. Therefore the point F is the center of the circle ABC. Q.E.F.

Corollary. From this it is clear that if in a circle a straight line cuts a straight line into two equal parts and at right angles, then the center of the circle lies on the cutting straight line.

Guide

Since the definition of a circle, I.Def.15, includes the existence of a center, Euclid is justified in taking a point G as the center. In this proof G is shown to lie on the perpendicular bisector of the line AB. He leaves it to the reader to show that G actually is the point F on the perpendicular bisector, but that's clear since only the midpoint F is equidistant from the two points C and E on the circle. From that observation it also follows that the center of a circle is unique, although the uniqueness can easily be proved in other ways. As Todhunter remarked, Euclid implicitly assumes that the perpendicular bisector of AB actually intersects the circle in points C and E.

Use of this proposition and its corollary

About half the proofs in Book III and several of those in Book IV begin with taking the center of a given circle, but in plane geometry it isn't necessary to invoke this proposition III.1, since the only way that circles can occur is if they were constructed around a center to begin with. Even in solid geometry, the center of a circle is usually known, so that III.1 isn't necessary. Indeed, that is the case whenever the center is needed in Euclid's books on solid geometry (see XI.23, XIII.9 through XIII.13, and XIII.16). Sections of spheres cut by planes are also circles, as are certain plane sections of cylinders and cones, and as the spheres, cylinders, and cones were generated by rotating semicircles, rectangles, and triangles about their sides, the center of the circle is known to be at the intersection of the side and the plane. In that sense, this proposition is redundant.

The corollary is used in propositions III.9 and III.10.

©1996, 1997 David E. Joyce, Department of Mathematics and Computer Science, Clark University, Worcester, MA 01610
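A computational analogue of the corollary (not part of Joyce's page): the center lies on the perpendicular bisector of every chord, so intersecting the bisectors of two chords locates it. A minimal sketch, assuming three known points on the circle:

```python
def circle_center(a, b, c):
    """Center of the circle through points a, b, c, found as the
    intersection of the perpendicular bisectors of chords ab and bc
    (each bisector passes through the center, per the corollary)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    # Expanding |p-a|^2 = |p-b|^2 gives a linear equation in p (the
    # bisector of ab); the same for chord bc. Solve the 2x2 system.
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

print(circle_center((1, 0), (0, 1), (-1, 0)))  # (0.0, 0.0): the unit circle
```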
188822
https://www.numerade.com/ask/question/methane-ethane-mass-spectrum-mass-spectrum-100-100_-1-mz-mz-nist-chemistry-webbook-https-ilebbook-nist-govlchemistry-figure-11-the-mass-spectrum-of-methane-and-ethane-q31-the-mass-spectrum-f-98517/
FIGURE 11. The mass spectra of methane and ethane: relative intensity versus m/z (methane peaks between m/z 11 and 18; ethane peaks between m/z 5 and 35). Source: NIST Chemistry WebBook.

Q 3.1. The mass spectrum for ethane is given in FIG. 11. What is the identity of the particles that give rise to the signals at 15 m/z? 30? 29?

Textbook: Chemistry: Structure and Properties, Nivaldo Tro, 2nd Edition. Question added by Briana J.; solved by Madhur L (03/13/2023).

Step 1: The molecular weight of ethane is 2 × 12 + 6 × 1 = 30. At 15 m/z, the signal is likely due to the fragmentation of ethane into a methyl cation, CH3+, whose nominal mass is 12 + 3 × 1 = 15. The signal at 30 m/z is the intact molecular ion, C2H6+• (the heaviest ion in the spectrum), and the signal at 29 m/z is the ethyl cation, C2H5+, produced by loss of one hydrogen atom from the molecular ion.
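A quick way to check candidate fragment assignments is to total nominal atomic masses in code. A small sketch (not part of the Numerade solution), using integer masses C = 12, H = 1:

```python
MASS = {"C": 12, "H": 1}   # nominal (integer) atomic masses

def mz(formula):
    """Nominal m/z of a singly charged fragment, e.g. {'C': 2, 'H': 5}."""
    return sum(MASS[el] * n for el, n in formula.items())

print(mz({"C": 1, "H": 3}))   # 15 -> methyl cation, CH3+
print(mz({"C": 2, "H": 5}))   # 29 -> ethyl cation, C2H5+ (loss of one H)
print(mz({"C": 2, "H": 6}))   # 30 -> molecular ion, C2H6+.
```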
188823
https://www.effortlessmath.com/math-topics/finding-x-and-y-intercepts-in-the-standard-form-of-equation/?srsltid=AfmBOorAHmITzgDU0c6ls5kHZ4kd-KA0bAuWrrw0tZa_NsFQdwvuiP4N
How to Find x- and y-intercepts in the Standard Form of Equation? - Effortless Math

In this comprehensive and step-by-step guide, you will learn how to find x- and y-intercepts when the standard form of an equation is provided. The standard form of a linear equation in two variables is typically written as Ax + By = C, where A, B, and C are constants and x and y are the variables. This form is sometimes also referred to as the "general form" or "standard linear form".

A step-by-step guide to finding x- and y-intercepts in the standard form of equation

In the standard form of the equation, it is not immediately clear what the x- and y-intercepts of the graph of the equation are. However, it is still possible to find them using algebraic techniques.

To find the x-intercept, we need to find the value of x when y = 0. This means we can substitute 0 for y in the equation and solve for x. The resulting value of x will be the x-intercept.

To find the y-intercept, we need to find the value of y when x = 0. This means we can substitute 0 for x in the equation and solve for y. The resulting value of y will be the y-intercept.

Finding x- and y-intercepts in the Standard Form of Equation – Example 1

Find the x- and y-intercepts of the line 2x + 3y = 6.

Solution: To find the x-intercept, we set y = 0 and solve for x:
2x + 3(0) = 6
2x = 6
x = 3
So, the x-intercept is (3, 0).
To find the y-intercept, we set x = 0 and solve for y:
2(0) + 3y = 6
3y = 6
y = 2
So, the y-intercept is (0, 2).

Therefore, the graph of the equation 2x + 3y = 6 intersects the x-axis at (3, 0) and the y-axis at (0, 2).

Exercises for Finding x- and y-intercepts in the Standard Form of Equation

Find the x- and y-intercepts of each line.
1. 2x + y = −4
2. x − y = 6
3. 3x − 2y = 18

Answers:
1. (−2, 0), (0, −4)
2. (6, 0), (0, −6)
3. (6, 0), (0, −9)
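The same substitution procedure is easy to express in code. A short sketch (not from the article) that reproduces the worked example and checks exercise 1:

```python
def intercepts(A, B, C):
    """x- and y-intercepts of the line Ax + By = C in standard form.

    Returns ((x, 0), (0, y)); an intercept is None when the line is
    parallel to that axis (A == 0 or B == 0)."""
    x_int = (C / A, 0) if A != 0 else None   # set y = 0, solve for x
    y_int = (0, C / B) if B != 0 else None   # set x = 0, solve for y
    return x_int, y_int

print(intercepts(2, 3, 6))    # ((3.0, 0), (0, 2.0)) -- the worked example
print(intercepts(2, 1, -4))   # ((-2.0, 0), (0, -4.0)) -- exercise 1
```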
188824
https://jjoc45.github.io/analysis/Lectures/L20.html
MT2002 Analysis

The intermediate value theorem

The naive definition of continuity (the graph of a continuous function has no breaks in it) can be used to explain the fact that a function which starts below the x-axis and finishes above it must cross the axis somewhere.

The Intermediate Value Theorem: If f is a function which is continuous at every point of the interval [a, b] and f(a) < 0, f(b) > 0, then f(x) = 0 at some point x ∈ (a, b).

Proof: The idea of the proof is to look for the first point at which the graph of f crosses the axis.

Let X = {x ∈ [a, b] | f(y) ≤ 0 for all y ∈ [a, x]}. Then X is non-empty since a ∈ X, and X ⊆ [a, b] so it is bounded. Hence by the Completeness Axiom, X has a least upper bound α (say). We claim that f(α) = 0.

Proof of that: We will show that either of the assumptions f(α) > 0 or f(α) < 0 leads to a contradiction; the result then follows from the Trichotomy property of the Order Axiom.

So suppose f(α) > 0. Say f(α) = ε. Then by continuity, for some δ > 0 we have f(x) > 0 for x lying in the interval (α − δ, α + δ). But then α − δ would be an upper bound of X, contradicting the fact that α is the least upper bound.

Similarly, suppose f(α) < 0. Say f(α) = −ε. Then for some δ > 0 we have f(x) < 0 for x lying in the interval (α − δ, α + δ). But this is a contradiction, since then a point such as α + δ/2 would lie in X (f ≤ 0 holds on all of [a, α + δ/2]), and α is an upper bound of X.

This completes the proof.

Remarks

1. You should compare this to the earlier proof of the existence of √2. The fact that these two proofs are so similar is no coincidence, since we could prove the existence of √2 by applying the intermediate value theorem to the function f(x) = x² − 2 on the interval [1, 2].

2. The reason that this result is called the Intermediate Value Theorem comes from the following corollary.

Corollary: If f is a continuous function on the interval [a, b] and c is a real number between f(a) and f(b), then f(x) attains the value c at some point between a and b.

Proof: Apply the theorem to the function f(x) − c.

Applications

1. Any real polynomial of odd degree has a real root.

Proof: Suppose p(x) = x^k + a_(k−1)x^(k−1) + ... + a_1x + a_0. Then as x → ∞, p(x) → ∞. Also, since k is odd, as x → −∞, p(x) → −∞. Hence one can find an interval on which p changes sign, and so we must have a real root.

2. Solving an equation f(x) = 0.

1. The bisection method: Find an interval [a, b] on which the function changes sign. Then evaluate f at the midpoint (a + b)/2 and choose whichever subinterval f changes sign on. Then repeat to get smaller and smaller subintervals.

2. The method of false position: The same idea, but this time, instead of taking the midpoint of the interval, linearly interpolate between (a, f(a)) and (b, f(b)) to decide where to split the interval. In fact, this is at the point c = (b f(a) − a f(b)) / (f(a) − f(b)).

These methods are both very robust and do not require the differentiability assumptions needed, for example, by the (usually) faster Newton-Raphson method.
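A minimal sketch of both methods (not part of the original notes), applied to f(x) = x² − 2 from Remark 1 so that the root found is √2:

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection: repeatedly halve an interval on which f changes sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:      # sign change on [a, m]
            b = m
        else:                 # sign change on [m, b]
            a, fa = m, fm
    return (a + b) / 2

def false_position(f, a, b, n=60):
    """Method of false position: split at the linear interpolant's zero,
    c = (b*f(a) - a*f(b)) / (f(a) - f(b))."""
    fa, fb = f(a), f(b)
    for _ in range(n):
        c = (b * fa - a * fb) / (fa - fb)
        fc = f(c)
        if fa * fc <= 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

f = lambda x: x**2 - 2            # the function from Remark 1
print(bisect(f, 1, 2))            # ~1.41421356, i.e. the square root of 2
print(false_position(f, 1, 2))    # the same root
```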
3. The intermediate value theorem can be used to prove a variety of simple "real-life" results.

1. The mashed potato theorem: A plate of mashed potato can be evenly divided by a single straight vertical knife cut.

Proof: In position K1 less than half the potato is at the left of the knife; in position K2 more than half is at the left. Hence (by the intermediate value theorem) there is an intermediate position where exactly half is at one side. You may care to think of how continuity should be involved in this argument.

2. The mashed potato and beans theorem: A plate of mashed potato and baked beans can be evenly divided by a single straight vertical knife cut.

Proof: Choose one particular angle, and the last result shows that you can divide the potatoes by a cut Kα at this angle. Then (say) there will be more than half the beans on the left of the cut. Now vary the angle continuously by π (180 degrees) until the knife is in the same position as before, but pointing the other way. At each angle, make sure you bisect the potatoes. Now less than half the beans are on the left, and so you passed through an intermediate position where both beans and potatoes were divided fairly. This result even holds true if you pile the beans on top of the potato (or vice versa).

3. Note that the above two results were achieved with vertical cuts. If you allow cuts which are not vertical, you can use the extra "degree of freedom" to prove:

The bowl of fruit theorem: An apple, a pear and a banana can be equally divided by a single knife-cut.

Proof: Exercise.

Remarks: This last result is usually called the Ham Sandwich Theorem (two pieces of bread and the ham). By putting the three pieces of fruit far apart, you should see that you do not have any freedom to deal with a fourth volume. If you move from three dimensions to four, though ...

JOC September 2001
188825
https://www.researchgate.net/publication/273403825_Fermat's_point_from_five_perspectives
Published Time: 2015-04-01

Fermat's point from five perspectives
Jungeun Park and Alfinio Flores, Department of Mathematical Sciences, University of Delaware, Newark, USA
International Journal of Mathematical Education in Science and Technology, April 2015, 46(3). DOI: 10.1080/0020739X.2014.979894

Abstract. The Fermat point of a triangle is the point that minimizes the sum of the distances from that point to the three vertices. Five approaches to study the Fermat point of a triangle are presented in this article. First, students use a mechanical device using masses, strings and pulleys to study the Fermat point as the one that minimizes the potential energy of the system. Second, students use soap films between parallel planes connecting three pegs. The tension on the film will be minimal when the sum of distances is minimal. Third, students use an empirical approach, measuring distances in an interactive GeoGebra page. Fourth, students use Euclidean geometry arguments for two proofs based on the Torricelli configuration, and one using Viviani's Theorem. And fifth, the kinematic method is used to gain additional insight on the size of the angles between the segments joining the Fermat point with the vertices.
(Received 7 March 2014)
Corresponding author: Alfinio Flores. Email: alfinio@udel.edu
© 2014 Taylor & Francis

Keywords: Fermat point; Torricelli configuration; soap films; minimal sum of distances

Introduction

The Fermat point of a triangle ABC is the point P such that the sum of the distances PA + PB + PC from that point to the three vertices is minimal (Figure 1). This problem can arise as a purely geometrical question, the way Fermat posed it in 1643 and Torricelli (1608-1647) solved it, or in contexts where reducing the sum of distances results in an advantage in terms of effort or cost, for example, locating a power plant serving three cities (assuming each city receives the same energy) or finding the place to anchor a boat that will drop the cable to retrieve three treasure chests from the bottom of the sea (assuming the chests have equal weights). Solving the problem for three points using a mechanical device goes back to Lamé and Clapeyron in 1829. The history of minimizing the sum of distances to a set of more than three points starts with Gergonne in 1811. The letter from 1836 from Carl Friedrich Gauss to Schumacher about the minimum distance to four points describes Gauss's idea of extending Fermat's problem to more than three points in terms of finding a network of minimal length connecting the original points, rather than finding a single point that minimizes the sum of distances. Gauss introduces additional points to form such networks, which later became known as Steiner points. This paper will deal mainly with triangles whose biggest angle is less than 120°. We address the case where one of the angles is more than 120° in the Final Remarks section.
Interested readers may find various methods and proofs in other sources, including weighted cases and more than three points. [5, p.194]

Figure 1. Sum of distances to three vertices.

In this article, students look at the properties of the point P inside a triangle and the corresponding segments and angles from five perspectives. First, students explore the problem using a mechanical device, using string and weights. Second, students use soap films connecting three pins between parallel plates to represent the problem. Third, students use an interactive web page made with GeoGebra to experiment and verify the result by measuring. Fourth, students use classical Euclidean geometry arguments to prove properties of Fermat's point and related segments. Finally, students use the kinematic method to gain additional insight about the angles of the segments joining Fermat's point with the vertices. By using different approaches and connections, students can develop a better understanding of Fermat's point and the properties of the segments joining this point with the vertices of the triangle.

2. Minimizing the sum of distances to the vertices of a triangle with a physical device

To find the point P inside the triangle ABC such that the sum of the distances AP + BP + CP is as small as possible, De Finetti suggests using a physical argument. One way to find this point is by drilling three holes in a rigid horizontal board corresponding to the positions of the three points A, B and C (Figure 2). We tie three strings together at the movable point P, pass one string through each one of the holes and hang equal masses from each string. The strings move freely through the holes. For a computer simulation, see the online demonstration. The system of masses will be in equilibrium when the potential energy is minimal, that is, when the sum of distances from the masses to the floor is minimal. Therefore, the sum of the distances from the vertices to the masses is maximal. This implies that AP + BP + CP should be minimal, because the total length of the strings is constant.

Because the magnitudes of the three forces acting along AP, BP and CP are equal, the resultant of two of them is on the angle bisector between them (Figure 3). And because the resultant of these two forces is in equilibrium with the third force, the line of action of this one is also along the angle bisector. Thus PA is on the angle bisector of PB and PC, PB is on the angle bisector of PA and PC, and PC is on the angle bisector of PA and PB. Therefore, each pair of segments forms an angle of 120° at P (Figure 4).

Polya [8, p.148] uses a triangle on a vertical plane, and pulleys instead of holes, for the same kind of argument. When the masses are hung from the pulleys so that the angle between the strings connecting the knot with the pulleys is 120°, the masses will remain in equilibrium (see Figure 5). If one of the masses is moved up or down, the system will tend to go back to the equilibrium position on its own. Because of friction, it may not completely go back to form exactly angles of 120°.

Figure 2. Equal masses suspended from the vertices.
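The equilibrium argument predicts that the three segments meet at 120° at the minimizing point. A quick numerical cross-check, not part of the article, is the Weiszfeld iteration, a standard fixed-point scheme for the geometric median; the triangle coordinates below are arbitrary:

```python
import math

def weiszfeld(pts, iters=200):
    """Approximate the point minimizing the sum of distances to pts
    by the Weiszfeld fixed-point iteration, started at the centroid."""
    x = sum(p[0] for p in pts) / len(pts)
    y = sum(p[1] for p in pts) / len(pts)
    for _ in range(iters):
        wx = wy = w = 0.0
        for px, py in pts:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:            # landed on a vertex; stop there
                return px, py
            wx += px / d; wy += py / d; w += 1.0 / d
        x, y = wx / w, wy / w
    return x, y

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)   # all angles below 120 degrees
px, py = weiszfeld([A, B, C])
for V, W in [(A, B), (B, C), (C, A)]:
    u = (V[0] - px, V[1] - py); v = (W[0] - px, W[1] - py)
    cos_t = (u[0]*v[0] + u[1]*v[1]) / (math.hypot(*u) * math.hypot(*v))
    print(round(math.degrees(math.acos(cos_t)), 3))   # each prints ~120.0
```

If the largest angle of the triangle exceeds 120°, the iteration settles on that vertex instead, consistent with the Final Remarks section below.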
3. Using soap films to minimize sum of distances

Using soap films is a great way for students to visualize minimal surfaces. [9,10] To study Fermat's point with soap films, two parallel transparent plates are attached with three pins perpendicular to the plates. The plates are submerged in a soap solution and slowly pulled out from the liquid. A soap film will be formed connecting the three pins (Figure 6). The tension along the soap film will be minimal when the total area of the film is minimal.

Figure 3. Two vectors with equal magnitudes and their resultant.
Figure 4. Sum of distances to the three vertices is minimal.

Because the width of the film is constant, this is equivalent to having a sum of distances that is minimal. When seen from above, you can see that the films join at a point inside the triangle and that the angle between the soap films is 120° (see Figure 7).

4. Interactive website to minimize sum of distances

Students can also experiment using GeoGebra and find the approximate position of the point inside the triangle that minimizes the sum of the distances to the vertices (Figure 8).

Figure 5. Equal masses hanging from three pulleys.
Figure 6. Soap films minimize sum of lengths.

Students will realize that when using experimental methods it is not always possible to get exactly three angles of 120°; however, Guven, Cekmez and Karatas have pointed to the importance of the use of dynamic geometry programs to facilitate the transition from empirical data to deductive proof. An interactive web page where students can experiment is available at geogebratube.org/student/m60133.

Figure 7. Soap films viewed from above.
Figure 8. Empirical verification at interactive website.

5. Using Euclidean geometry arguments to locate Fermat's point

We will find the position of the point that minimizes the sum of distances to the three vertices in two steps. We will first show that the point has to be on the line that connects C with the outside vertex F of an equilateral triangle constructed on side AB (Figure 9). Let P be a point inside triangle ABC, and triangle BAF an equilateral triangle. Construct the segment PB and let BPE be an equilateral triangle on this segment. Angle PBA is congruent to angle EBF because side BE is BP rotated 60° and side BF is BA also rotated 60°. Because PB is congruent to EB (sides of an equilateral triangle) and BA is congruent to BF, we have that triangles BPA and BEF are congruent (side-angle-side). Therefore, segment PA is congruent to segment EF. The sum of distances PA + PB + PC is equal to the sum of the lengths of the segments FE + EP + PC. This sum will be minimal if points C, P, E and F are on the straight line CF. In the same way, the position of P that minimizes the sum of distances is on the line that connects A with the outside vertex of an equilateral triangle on BC (Figure 10). The intersection of these two lines is the desired position. We will now prove that the angle between segments AG and CF is 120°, using a rotation of 60°. Rotate triangle ABG 60° counterclockwise around B. Segment BG goes to BC, AB goes to FB and GA goes to CF. Therefore, the angle between GA and CF is 60°, and angle APC is equal to 120°.
In the same way, we can prove that angle APB is 120°. This means that quadrilateral APBF is an inscribed quadrilateral. The same is true for quadrilaterals APCH and BPCG (Figure 11). This means that the circumcircles of the equilateral triangles on the sides of the original triangle intersect at Fermat's point (Figure 12). The configuration formed by the original triangle together with the three equilateral triangles on its sides is known as Torricelli's configuration. An interactive Torricelli configuration is available online.

Figure 9. Sum of distances is equal to length of path CDEF.
Figure 10. Location of point that minimizes the sum of distances.
Figure 11. Cyclic quadrilaterals.
Figure 12. Intersection of circumcircles.

We will now look at two alternative proofs. First, in Figure 13, angle EFB is congruent to angle DAB (corresponding angles of congruent triangles BEF and BDA). Both subtend segment DB. Therefore, F and A are on the same circle, so that ADBF is a cyclic quadrilateral. Thus angle ADF is congruent to angle ABF and angle FDB is congruent to angle BAF. Therefore angle ADB is 120°.

Figure 13. Congruent triangles.

Similar proofs can be designed with dynamic geometry activities, which guide students through the process of exploring, making conjectures and proving their conjectures. [5, p.108–114] For example, De Villiers created such activities with Geometer's Sketchpad, through which students make conjectures about the intersection of the three segments AG, CF and BH in Figure 11 and about their lengths, and prove their conjectures that the segments have equal length using the property of congruent triangles, and that the segments intersect at one point using the property of cyclic quadrilaterals inscribed in the three circles in Figure 12. [5, p.108–114]

Second, we can also use Viviani's Theorem, which states that the sum of the distances from a point inside an equilateral triangle to its sides is equal to its height, to prove that the point P that minimizes AP + BP + CP is the point where angle APB = angle BPC = angle CPA. In Figure 14, P is the point where angle APB = angle BPC = angle CPA = 120°. Construct the line through A perpendicular to segment PA, and repeat the procedure for PB and PC. Then triangle EFG formed by this procedure is equilateral, because each of its angles is 60° (e.g. the sum of the angles in quadrilateral CPBE is 360°, and angle ECP + angle EBP + angle BPC = 90° + 90° + 120° = 300°; therefore angle BEC = 60°). Then, to complete the proof, we pick another point P′ and show AP + BP + CP < AP′ + BP′ + CP′. Draw a line passing through P′ that is perpendicular to side FG, and label the intersection point A′. Similarly, construct B′ and C′ on GE and EF, respectively (Figure 15). Then, because A′P′ is the shortest distance between side FG and point P′, A′P′ ≤ AP′, and similarly B′P′ ≤ BP′ and C′P′ ≤ CP′. Because P′ is different from P, not all three equalities hold. Therefore, A′P′ + B′P′ + C′P′ < AP′ + BP′ + CP′. Viviani's Theorem gives AP + BP + CP = A′P′ + B′P′ + C′P′, and thus AP + BP + CP < AP′ + BP′ + CP′.

Figure 14. Triangle EFG formed from the perpendicular lines of PA, PB, and PC.
Figure 15. Verification of Fermat point using Viviani's Theorem.
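The two-line construction of Section 5 is easy to verify numerically. The sketch below, which is not from the article, erects equilateral triangles outward on AB and BC, intersects lines CF and AG, and checks that the resulting sum PA + PB + PC equals the length of CF (the path C-P-E-F of the proof); the triangle is the same arbitrary one used in the earlier sketch:

```python
import math

def apex(p, q, r):
    """Apex of the equilateral triangle erected on segment pq,
    on the opposite side of line pq from point r."""
    for s in (60, -60):
        a = math.radians(s)
        f = (p[0] + (q[0]-p[0])*math.cos(a) - (q[1]-p[1])*math.sin(a),
             p[1] + (q[0]-p[0])*math.sin(a) + (q[1]-p[1])*math.cos(a))
        cross_f = (q[0]-p[0])*(f[1]-p[1]) - (q[1]-p[1])*(f[0]-p[0])
        cross_r = (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
        if cross_f * cross_r < 0:            # opposite sides of line pq
            return f

def line_intersection(p1, p2, p3, p4):
    """Intersection of lines p1p2 and p3p4 (assumed non-parallel)."""
    d1 = (p2[0]-p1[0], p2[1]-p1[1]); d2 = (p4[0]-p3[0], p4[1]-p3[1])
    den = d1[0]*d2[1] - d1[1]*d2[0]
    t = ((p3[0]-p1[0])*d2[1] - (p3[1]-p1[1])*d2[0]) / den
    return (p1[0] + t*d1[0], p1[1] + t*d1[1])

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
F = apex(A, B, C)                  # outside vertex on AB
G = apex(B, C, A)                  # outside vertex on BC
P = line_intersection(C, F, A, G)  # Fermat point

dist = lambda u, v: math.hypot(u[0]-v[0], u[1]-v[1])
print(dist(P, A) + dist(P, B) + dist(P, C))   # minimal sum, ~6.54 here
print(dist(C, F), dist(A, G))                 # both equal that same sum
```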
6. Using the kinematic method

The last approach uses the kinematic method. This method uses a constant relation between two vector functions to find a similar relation between the velocities of the corresponding endpoints of the vectors. And vice versa: if we know that a relation holds for the velocities of the endpoints for all times, we know that the same relation will hold for the vector functions (except possibly for an added constant vector). This is similar to the case of real functions. For example, if two functions f and g are related by f(x) = k g(x) for all x, where k is a constant, then f′(x) = k g′(x) for all x; vice versa, if f′(x) = k g′(x) for all x, then f(x) = k g(x) + c, where c is a constant.

We will use the case where a vector function r1(t) is identical to the rotation of another vector function r2(t) by a constant angle α. We will express this relation by r1 = rot(α) r2. Let v1(t) be the derivative of r1(t), that is

  v1(t) = lim_{h→0} [r1(t+h) − r1(t)] / h.   (1)

We will use the following two theorems.

Theorem 1: If r1 = rot(α) r2 for all times, then v1 = rot(α) v2 for all times (Figure 16).

Proof: We have
  lim_{h→0} [r1(t+h) − r1(t)] / h = lim_{h→0} [rot(α) r2(t+h) − rot(α) r2(t)] / h
    = rot(α) lim_{h→0} [r2(t+h) − r2(t)] / h = rot(α) v2.
Thus v1 = rot(α) v2.

Figure 16. Relation of vectors and relation of velocities.

Theorem 2: Assume that the functions r1 and r2 are such that for all times their corresponding velocities satisfy v1 = rot(α) v2. Then r1 = rot(α) r2 + k, where k is a constant vector (Figure 17).

Proof: We will use the fact that if the derivative of a vector function is constantly equal to zero, then the function is constant. Consider the function r1 − rot(α) r2. Its derivative is v1 − rot(α) v2. This is constantly zero, thus r1 − rot(α) r2 = k and r1 = rot(α) r2 + k. (For other theorems of the kinematic method, see the references.)

Figure 17. Relation of velocities and relation of vectors.

Students can use an interactive web page that illustrates the use of the kinematic method in relation to Fermat's point. Triangles ACF and BCD are equilateral triangles on two sides of an arbitrary triangle ABC (Figure 18). Vector AF is congruent to AC, rotated 60° counterclockwise. We can use Theorem 1 to say that the velocity of F has the same magnitude as the velocity of C but is rotated 60° to the left. Vector BD is congruent to BC, rotated 60° clockwise. Therefore, by Theorem 1, the velocity of D has the same magnitude as the velocity of C, but is rotated 60° to the right. Therefore, the velocity of D has the same magnitude as the velocity of F, rotated 120°. We can now use Theorem 2 to conclude that BF = rot(120°) AD + k. To see that k = 0, we can start with triangle ABC as an equilateral triangle (Figure 19). In this case, it is clear that BF and AD are medians/altitudes/perpendicular bisectors/angle bisectors in the bigger equilateral triangle DFE. Segments AD and BF are thus congruent. Therefore, the angle between segments BF and AD is also 120°.

Figure 18. Rotated velocities with the same magnitude.
Figure 19. Initial position shows k = 0.
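The conclusion BF = rot(120°) AD with k = 0 can be checked numerically for any triangle. A minimal sketch, assuming the orientation conventions stated above (AF is AC rotated 60° counterclockwise, BD is BC rotated 60° clockwise); the triangle coordinates are arbitrary:

```python
import math

def rot(v, deg):
    """Rotate the 2-D vector v counterclockwise by deg degrees."""
    a = math.radians(deg)
    return (v[0]*math.cos(a) - v[1]*math.sin(a),
            v[0]*math.sin(a) + v[1]*math.cos(a))

A, B, C = (0.3, -1.2), (4.1, 0.7), (1.0, 3.4)     # an arbitrary triangle

# F: apex of equilateral triangle ACF (AF is AC rotated 60 deg CCW);
# D: apex of equilateral triangle BCD (BD is BC rotated 60 deg CW).
AC = (C[0] - A[0], C[1] - A[1])
BC = (C[0] - B[0], C[1] - B[1])
F = (A[0] + rot(AC, 60)[0], A[1] + rot(AC, 60)[1])
D = (B[0] + rot(BC, -60)[0], B[1] + rot(BC, -60)[1])

BF = (F[0] - B[0], F[1] - B[1])
AD = (D[0] - A[0], D[1] - A[1])
print(math.hypot(*BF), math.hypot(*AD))            # equal lengths
angle = math.degrees(math.atan2(BF[1], BF[0]) - math.atan2(AD[1], AD[0]))
print(angle % 360)                                 # 120.0: BF = rot(120) AD
```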
7. Final remarks

The point of intersection of the three circumcircles of the equilateral triangles is called the Torricelli point of a triangle. If the largest angle in the triangle is 120° or less, the Torricelli point coincides with the Fermat point. If the biggest angle in the original triangle is more than 120°, the point that minimizes the sum of distances to the three vertices will be the vertex of this angle (e.g. C in triangle ABC in Figure 20). To prove this, we will show that for any point X other than C, AC + BC < AX + BX + CX. Let angle ACX be α, and angle BCX be β. Then α + β = angle ACB > 120° (Figure 20). Now draw a line passing through point X that is perpendicular to side AC, and label the intersection point E; similarly construct F on side BC (Figure 21). Then, because CXE is a right triangle, CE = CX cos α, and similarly CF = CX cos β. Because AC = AE + EC and BC = BF + FC, we have AC + BC = (AE + BF) + (CE + CF). The second term (CE + CF) equals CX cos α + CX cos β = CX(cos α + cos β) = 2CX cos((α + β)/2) cos((α − β)/2). Because α + β = angle ACB > 120°, we have (α + β)/2 > 60°, so cos((α + β)/2) < 1/2 and 0 < cos((α − β)/2) ≤ 1. Thus CE + CF < CX. Therefore, AC + BC < AE + BF + CX < AX + BX + CX (note that AE < AX because AX is the hypotenuse of the right triangle AEX, and similarly BF < BX).

Note that if the biggest angle in the original triangle is more than 120°, the three circumcircles of the equilateral triangles will no longer intersect at the minimal point. However, the segments connecting the point of intersection J of the circumcircles of the three equilateral triangles with the two other vertices A and B will form an angle of 120° (Figure 22).

Figure 20. Triangle whose largest angle is more than 120°.
Figure 21. Two segments drawn from X perpendicular to AC and BC.
Figure 22. Three circumcircles intersecting at point J in case of largest angle more than 120°.

Of course, there are other situations where the idea of the Fermat point can be applied. Cases where the given points are on a straight line can serve as a good introductory problem before students explore the several triangle cases described above. As with the triangle cases, real-life contexts such as building a bus station or retrieving sunken treasure can motivate students, and both dynamic explorations and traditional proof can be used. Based on the observation that the sum of the distances from a point in an interval to the endpoints is constant, no matter where that point is located, [2, p.101–103; 6, p.194] students may realize that when an odd number of points are given on a straight line, the point where the sum of the distances from all the original points is minimal is the middle point (median), whereas when an even number of points are given, the point that minimizes the sum of distances can be located anywhere between the two middle points among the original points. Students can also explore finding minimal configurations of lines in squares, pentagons or other geometrical figures. During the explorations, students can make conjectures based on their investigation of triangles, where there is only one Fermat point such that the sum of the distances between the point and the three vertices is minimal. In the case of a concave quadrilateral, students may find the case similar to the triangle whose biggest angle is more than 120°, because such a point in a concave quadrilateral is the vertex of the reflex angle. [5, p.194] In the case of a convex quadrilateral, students may realize a need for multiple additional points to find such an optimal path, [2, p.3] and choose between the optimal path and having one point that connects all vertices, based on the real-life situation and the limited resources that they may have.

References

[1] Brazil M, Graham RL, Thomas DA, Zachariasen M. On the history of the Euclidean Steiner tree problem. Arch Hist Exact Sci. 2014;68:327–354.
[2] Dicken B. Sunken treasure. In: Gould H, Murray D, Sanfratello A, editors. Mathematical modeling handbook. Bedford, MA: The Consortium for Mathematics and its Applications (COMAP); 2012. p. 99–106.
[3] Franksen OI, Grattan-Guinness I. The earliest contribution to location theory. Spacio-economic equilibrium with Lamé and Clapeyron (1829). Math Comput Simulation. 1989;31:195–220.
[4] De Villiers M. Weighted airport problem [Internet]. Dyn Math Learn. 2005.
[5] De Villiers M. Rethinking proof with Sketchpad. Emeryville, CA: Key Curriculum Press; 2003.
[6] De Finetti B. Die Kunst des Sehens in der Mathematik [The art of seeing in mathematics]. Basel: Birkhäuser; 1974.
[7] Kabal S, Gevay G. Polya's mechanical model for the Fermat point [Internet]. 2008.
[8] Polya G. Mathematics and plausible reasoning. Vol. 1: Induction and analogy in mathematics. Princeton: Princeton University Press; 1954.
[9] Courant R. Soap film experiments with minimal surfaces. Am Math Monthly. 1940;47:167–174.
[10] Hoffman DT. Smart soap bubbles can do calculus. Math Teach. 1979;72:377–385, 389.
[11] Guven B, Cekmez E, Karatas I. Using empirical evidence in the process of proving: the case of dynamic geometry. Teach Math Appl. 2010;29:193–207.
[12] Coxeter HSM, Greitzer SL. Geometry revisited. New York, NY: Random House; 1967.
[13] Woltermann M. Fermat's problem for Torricelli [Internet]. 2014. Available from: washjeff.edu/users/mwoltermann/Dorrie/91.pdf.
[14] Lyúbich YuI, Shor LA. Método cinemático en problemas geométricos [Kinematic method in geometrical problems]. Moscow: Mir; 1978.
[15] Flores A. The kinematic method and the Geometer's Sketchpad in geometrical problems. Int J Comput Math Learn. 1998;3:1–12.
(2010) point to the value of the insight that can be gained by empirical arguments and the deductive thinking associated with it. Visual proofs are not valued as formal as real proofs but pedagogically, they may open new lines of thought connecting empirical and deductive thinking if enabled. ... Mathematical inquiry with the Steiner ellipse using GeoGebra and numerical analysis Article Sep 2023 Int J Math Educ Sci Tech Ozlem Cezikturk analytic geometry, Steiner ellipse, explorations via Geogebra and numerical analysis View Show abstract ... Several geometric solutions are known and an excellent review is presented in the Wikipedia's Fermat point article. Additionally Park and Flores present five approaches to study the Fermat point using amongst others a mechanical device and even soap films. ... ... The aim of this work is to obtain the Cartesian coordinates of the Fermat-Torricelli point for an arbitrary triangle, with no internal angle greater than 120 • , using only calculus and algebraic concepts, which in a way complements the work of Park and Flores. The authors believe that their contribution will be to show that solving the system of two non-linear simultaneous equations, although in theory could be made 'by hand', in practice it is very laborious to carry out without the aid of mathematical computer software. Furthermore, it will be also shown that the software alone cannot solve the problem. ... ... At first glance, one could think that it is not so difficult to solve simultaneously any two of Equations (6). However, from the several mathematical computer programs available to authors, only Wolfram Research Mathematica R 6 and 7 (but not 8, 9 or 10) were able to find a solution, although such a solution is so enormously long (about 50 notebook pages, even for the case when xa, ya and yb are zero) that the authors did not try to simplify it. ... An algebraic approach to finding the Fermat–Torricelli point Article Full-text available Apr 2015 Oscar Luis Palacios-Vélez Felipe Pedraza-Oropeza Bernardo Samuel Escobar-Villagrán Using a calculus and an algebraic approach, the Cartesian coordinates of the Fermat-Torricelli point are deduced for triangles with no internal angle greater than 120°. Although in theory, the deduction of these coordinates could be made ‘by hand’, in practice it is very laborious to obtain them without the aid of mathematical computer software, but with human guidance, since there are mathematical artifices not yet incorporated into the software. It is also shown that these coordinates can be conveniently expressed in terms of the side lengths and the area of the triangle. These coordinates are contrasted with the coordinates of a similar point: one whose sum of the squares of the distances to the vertices of an arbitrary triangle is a minimum. View Show abstract ... In 1643 he posed it 5 to Torricelli as a challenge, that states: "Let he who does not approve of my method attempt the solution of the following problem: Given three points in a plane, find a fourth point such that the sum of its distances to the three given points is a minimum". Torricelli solved the problem using a simple geometric method . The point came to be known by different names: Fermat point, Torricelli point, Fermat-Torricelli-point. ... ... We give here only a brief account of Torricelli's method because it is well known . A, B, C are three given non collinear points. ... 
The fallacy of Fermat-Torricelli point Preprint Full-text available Oct 2024 Radhakrishnamurty Padyala Femat-Torricelli point is well known. It was a solution to a challenge posed by Fermat to Torricelli to find a fourth point P, from which the sum of the distances to three given noncollinear points A, B, C, is a minimum. Torricelli solved the problem using a simple geometric method. We show in this article that Torricelli's solution is invalid because it violates the fundamental Viviani's theorem in geometry. View Show abstract ... each vertex in the neighbourhood of do If is not visited then ExploreTree( , ) End for End Algorithm 3: CheckingBiconnectivity Input: : A Graph Output: : A Boolean to indicate if the graph is Biconnected; : The set of cutting For each vertex in the of do ′ = \ { } If not CCDFS( ′) then =False = ∪ { } End for Return , End Algorithm 4: CheckingTriconnectivity Input: : A Graph Output: : A Boolean to indicate if the graph is Triconnected; For each edge in the of do ′ = \ { } If not CCDFS( ′) then =False = ∪ { } End for Return , EndR is the transmission range of router in Wireless networks. The Fermat point is computed using equation 1. The establishment of links at the step 2.f and 3.b is done by checking the distance between the Fermat point and a closest point ' (step 2.f) or between the closest points (step 3.b). ... Fault-Tolerant Placement of Additional Mesh Nodes in Rural Wireless Mesh Networks: A Minimum Steiner Tree Based Centre of Mass With Bounded Edge Length Article Full-text available Aug 2021 Jean Louis Kedieng Ebongue Fendji Patience Leopold Bagona Wireless mesh networks are presented as an attractive solution to reduce the digital divide between rural and developed areas. In a multi-hop fashion, they can cover larger spaces. However, their planning is subject to many constraints including robustness. In fact, the failure of a node may result in the partitioning of the network. The robustness of the network is therefore achieved by carefully placing additional nodes. This work tackles the problem of additional nodes minimization when planning bi and tri-connectivity from a given network. We propose a vertex augmentation approach inspired by the placement of Steiner points. The idea is to incrementally determine cut vertices and bridges in the network and to carefully place additional nodes to ensure connectivity, bi and tri-connectivity. The approach relies on an algorithm using the centre of mass of the blocks derived after the partitioning of the network. The proposed approach has been compared to a modified version of a former approach based on the Minimum Steiner Tree. The different experiments carried out show the competitiveness of the proposed approach to connect, bi-connect, and tri-connect the wireless mesh networks. View Show abstract ... The Fermat point of a triangle is the point P that minimizes the sum of the distances from P to the three vertices of the triangle ( Park & Flores, 2015). If the largest angle of the triangle is less than 120˚ Fermat point will be inside the triangle. ... Soap films and GeoGebra in the study of Fermat and Steiner points Article Full-text available Oct 2017 Alfinio Flores Jungeun Park We discuss how mathematics and secondary mathematics education majors developed an understanding of Fermat points for the triangle as well as Steiner points for the square and regular pentagon, and also of soap film configurations between parallel plates where forces are in equilibrium. 
The activities included the use of soap films and the interactive geometry program GeoGebra. Students worked in small groups using these tools to investigate the properties of Fermat and Steiner points and then justified the results of their investigations using geometrical arguments. These activities are specific approaches for encouraging prospective teachers to use physical experiments to support students' development of mathematical curiosity and mathematical justifications.

The Fermat and the isodynamic points in optimal positioning by resection in surveying from three fixed points
Article, Mar 2025, SURV REV
Aristeidis Fotiou, Ioannis Fotiou
This study investigated the optimisation of resection in surveying for two-dimensional positioning, utilising distances and angles from three fixed points. The optimal location of the resected points was analysed in terms of precision. Isotropic conditions for a circular error ellipse with a potentially minimum radius/error are introduced, presenting new findings regarding optimal resection points. For distance observations, the Fermat points or the isogonic centres of the fixed triangle represent the optimal points with the least isotropic error. For angle observations, the optimal isotropic location was identified as the first isodynamic point.

Research trends in dynamic geometry software: A content analysis from 2005 to 2021
Article, Full-text available, May 2021
Rabia Nur Öndeş, Uğurenver
Dynamic geometry software (DGS), especially GeoGebra, has been used in mathematics lessons around the world since it enables a dynamic learning environment. To date, so much research about DGS has been published that there is a need for meaningful organisation. This study aims to give a broad picture of research related to DGS. For this reason, 210 articles accessed from the Web of Science database were analysed in terms of their purpose, research design, sample level, sample size, data collection tools, and data analysis methods, using the content analysis method. According to the findings, the most frequent characteristics were as follows: 'the effect of DGS on something' as a purpose, qualitative method as a research design, high school students as a sample level, the 101-300 interval as a sample size, documents and achievement tests as instruments, and descriptive analysis for both quantitative and qualitative studies. These results can help researchers to see past trends in DGS and conduct new studies. Keywords: dynamic geometry software, DGS, GeoGebra, content analysis, mathematics education

Mathematical modeling of a 3-CUP parallel mechanism using the Fermat point
Article, Jul 2021, MECH MACH THEORY
Omar Martínez, Israel Soto, Ricardo Campa
This document is devoted to the analysis of the pose and velocity kinematics, as well as the dynamics, of a parallel mechanism with three degrees of freedom of the type known as 3-CUP. This mechanism is constituted by two triangular rigid bodies, one fixed (the base) and the other mobile (the platform), connected to each other by three kinematic chains, each with a cylindrical (C), a universal (U), and a prismatic (P) joint. A novel approach is used for the kinematic analysis, as well as for obtaining the dynamic equations of motion of the system.
In this approach, tools of geometry and vector analysis, and mainly the Fermat point, are employed to obtain a closed solution of the forward kinematics problem for this kind of parallel robot with a variable-geometry triangular mobile platform. At the end, the analytical kinematics and dynamics models obtained are compared, via simulations, with the corresponding models computed using commercial software; the results validate the approach.

Rethinking Proof with Geometer's Sketchpad
Book, Full-text available, Jan 2012
Michael De Villiers
Rethinking Proof harnesses the power of dynamic geometry to engage students in proving conjectures and thinking about proof in different ways. Rethinking Proof offers motivations for proof beyond verification, including explanation, discovery, challenge, and systemization. In the activities, students work with Sketchpad sketches to make conjectures. Carefully constructed questions guide students to develop explanations of why their conjectures are true. It comes with extensive teacher's notes, and includes a download link containing pre-made Sketchpad sketches. A link to download Sketchpad for free is also provided.

On the history of the Euclidean Steiner tree problem
Article, Full-text available, May 2014
Marcus Brazil, Ronald L. Graham, Doreen A. Thomas, Martin Zachariasen
The history of the Euclidean Steiner tree problem, which is the problem of constructing a shortest possible network interconnecting a set of given points in the Euclidean plane, goes back to Gergonne in the early nineteenth century. We present a detailed account of the mathematical contributions of some of the earliest papers on the Euclidean Steiner tree problem. Furthermore, we link these initial contributions with results from the recent literature on the problem.

Using empirical evidence in the process of proving: The case of dynamic geometry. Teaching Mathematics and Its Applications, 29(4), 193-207
Article, Full-text available, Nov 2010, Teach Math Appl
Bulent Guven, Erdem Cekmez, İlhan Karataş
With the emergence of dynamic geometry software (DGS), a theoretical gap between the acquisition (inductive) and the justification (deductive) of a mathematical statement has started a debate. Some educators believe that deductive proof in geometry should be abandoned in favour of an experimental approach to mathematical justification. This article provides some indications of how empirical arguments can be used to gain insight into a proof through investigation and experimentation in a dynamic geometry environment, and shows that DGS can link the inductive and the deductive parts of thinking. Our observations point to the importance of conducting further research to determine the pedagogical principles that will contribute to the successful design of environments that facilitate the transition of students from empirical data to deductive proof.

Smart Soap Bubbles Can Do Calculus
Article, May 1979
Dale T. Hoffman
The problems of finding minimum distances, times, and costs are solved with soap bubble models.

Geometry Revisited
Book, Mar 1967
H. S. M. Coxeter, Samuel L. Greitzer

Die Kunst des Sehens in der Mathematik. Aus dem Italienischen übersetzt von Lulu Bechtolsheim [The Art of Seeing in Mathematics; translated from the Italian by Lulu Bechtolsheim]
Article, Jan 1974
Bruno de Finetti
The Kinematic Method and the Geometer's Sketchpad in Geometrical Problems
Article, Jul 1998, Int J Comput Math Learn
Alfinio Flores

The earliest contribution to location theory? Spatio-economic equilibrium with Lamé and Clapeyron, 1829
Article, Jul 1989, MATH COMPUT SIMULAT
Ole Immanuel Franksen, Ivor Grattan-Guinness

Induction and Analogy in Mathematics
Article, Jan 1954
G. Polya
188826
https://www.dentaljuce.com/shorts-hypercalcaemia
Hypercalcaemia: symptoms, signs, management, MCQs

Dentaljuce Shorts: 500 words, 10 MCQs, on general medicine and surgery.

Hypercalcaemia

Hypercalcaemia is a condition characterised by an abnormally high calcium (Ca2+) level in the blood serum. The normal range for calcium in the blood is between 2.1 and 2.6 mmol/L. Hypercalcaemia is defined by levels greater than 2.6 mmol/L.

Signs and Symptoms

Patients with mild increases in calcium levels often remain asymptomatic, particularly if the rise is gradual. However, those with more significant elevations or rapid onset may exhibit various symptoms. The mnemonic "Stones, Bones, Groans, Moans, Thrones, and Psychiatric Overtones" can be useful:

- Stones: Kidney stones or biliary stones.
- Bones: Bone pain due to the effects on bone metabolism.
- Groans: Abdominal discomfort and gastrointestinal symptoms.
- Moans: Non-specific complaints such as fatigue and weakness.
- Thrones: Polyuria and constipation.
- Psychiatric Overtones: Depression, anxiety, and cognitive disturbances.

Other symptoms may include cardiac arrhythmias, fatigue, nausea, vomiting, and decreased muscle tone.

Hypercalcaemic Crisis

A hypercalcaemic crisis is a medical emergency, typically involving calcium levels above 14 mg/dL (3.5 mmol/L). Symptoms include oliguria or anuria and somnolence, or even coma. Urgent measures to lower serum calcium are essential, often involving intravenous hydration, calcitonin, and bisphosphonates.

Causes

Primary hyperparathyroidism and malignancy account for the majority of hypercalcaemia cases. Other causes include:

- Parathyroid function: Solitary parathyroid adenoma, primary hyperparathyroid hyperplasia, and certain genetic conditions.
- Cancer: Tumours that release parathyroid hormone-related protein (PTHrP), and local osteolytic hypercalcaemia from bone metastasis.
- Vitamin-D disorders: Hypervitaminosis D, sarcoidosis, and other granulomatous diseases.
- High bone turnover: Hyperthyroidism, multiple myeloma, prolonged immobilisation, and Paget's disease.
- Kidney failure: Tertiary hyperparathyroidism and milk-alkali syndrome.
- Others: Adrenal insufficiency, acromegaly, and excessive calcium consumption.

[Image: Micrograph of ovarian small cell carcinoma of the hypercalcaemic type.]

Diagnosis

Diagnosis involves measuring either corrected calcium or ionised calcium levels and confirming the diagnosis after a week. A detailed history and examination are essential to identify underlying causes. Laboratory tests include intact parathyroid hormone (iPTH), parathyroid hormone-related protein (PTHrP), calcitriol, and calcifediol levels.

[Image: An Osborn wave, an abnormal EKG tracing that can be associated with hypercalcaemia.]

Treatments

The primary goal is to treat the hypercalcaemia and subsequently address the underlying cause. Immediate treatment is required for calcium levels above 13 mg/dL or rapidly rising levels.

Fluids and Diuretics

Initial therapy involves intravenous (IV) fluids to correct dehydration and enhance renal calcium excretion. Following rehydration, loop diuretics such as furosemide can be used to maintain a high urine output while preventing fluid overload.

Bisphosphonates and Calcitonin

Bisphosphonates such as pamidronate and zoledronate inhibit bone resorption by osteoclasts and are often the first-line treatment for hypercalcaemia of malignancy. Calcitonin can also be used for immediate but short-term reduction of calcium levels.
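To make the thresholds above concrete, here is a small illustrative Python sketch (our addition, not part of the original Shorts text). It classifies a serum calcium value against the cut-offs quoted in this article; the albumin-correction formula used is one commonly taught approximation and is an assumption here, since the article does not state a specific formula.

```python
# Illustrative sketch only: classifies a serum calcium value using the
# thresholds quoted in this article (normal 2.1-2.6 mmol/L; hypercalcaemia
# > 2.6 mmol/L; crisis > 3.5 mmol/L). The albumin correction below is a
# commonly used approximation, assumed here for illustration.

def corrected_calcium(total_ca_mmol_l: float, albumin_g_l: float) -> float:
    """Albumin-adjusted total calcium in mmol/L (common approximation)."""
    return total_ca_mmol_l + 0.02 * (40.0 - albumin_g_l)

def classify(ca_mmol_l: float) -> str:
    if ca_mmol_l > 3.5:
        return "hypercalcaemic crisis (medical emergency)"
    if ca_mmol_l > 2.6:
        return "hypercalcaemia"
    if ca_mmol_l >= 2.1:
        return "within normal range (2.1-2.6 mmol/L)"
    return "below normal range"

# Example: total calcium 2.9 mmol/L with albumin 32 g/L
print(classify(corrected_calcium(2.9, albumin_g_l=32)))  # 3.06 -> hypercalcaemia
```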
Other Therapies

Other therapies include denosumab, glucocorticoids for vitamin D-related hypercalcaemia, and dialysis in severe cases. Phosphate therapy and rarely used drugs like plicamycin and gallium nitrate may also be considered.

Self-assessment MCQs (single best answer)

What is the normal range for calcium in the blood?
- 1.5 to 2.0 mmol/L: No, this range is too low. The normal range for calcium in the blood is higher.
- 2.1 to 2.6 mmol/L: Well done! This is correct. The normal range for calcium in the blood is 2.1 to 2.6 mmol/L.
- 2.7 to 3.2 mmol/L: No, this range is too high. The normal range for calcium in the blood is lower.
- 1.0 to 1.5 mmol/L: No, this range is too low. The normal range for calcium in the blood is higher.
- 3.0 to 3.5 mmol/L: No, this range is too high. The normal range for calcium in the blood is lower.

What mnemonic is useful for remembering the symptoms of hypercalcaemia?
- ABCDE: No, ABCDE is typically used for the primary survey in trauma patients. It does not relate to hypercalcaemia symptoms.
- VITALS: No, VITALS is not related to the symptoms of hypercalcaemia.
- MARCH: No, MARCH is not used for hypercalcaemia symptoms.
- Stones, Bones, Groans, Moans, Thrones, and Psychiatric Overtones: Well done! This is correct. This mnemonic helps remember the symptoms of hypercalcaemia.
- CALCIUM: No, CALCIUM is not a commonly used mnemonic for hypercalcaemia symptoms.

Which symptom is NOT typically associated with hypercalcaemia?
- Kidney stones: No, kidney stones are a common symptom of hypercalcaemia.
- Bone pain: No, bone pain is a common symptom of hypercalcaemia.
- Abdominal discomfort: No, abdominal discomfort is a common symptom of hypercalcaemia.
- Increased muscle tone: Well done! This is correct. Increased muscle tone is not typically associated with hypercalcaemia.
- Depression: No, depression is a common symptom of hypercalcaemia.

What is considered a medical emergency in hypercalcaemia?
- Calcium levels above 2.6 mmol/L: No, while elevated, 2.6 mmol/L is not considered a medical emergency.
- Calcium levels above 3.0 mmol/L: No, while elevated, 3.0 mmol/L is not considered a medical emergency.
- Calcium levels above 3.5 mmol/L: Well done! This is correct. Calcium levels above 3.5 mmol/L are considered a medical emergency.
- Calcium levels above 4.0 mmol/L: No, while this is a dangerously high level, the threshold for emergency is lower, at 3.5 mmol/L.
- Calcium levels above 4.5 mmol/L: No, while this is an extremely high level, the threshold for emergency is lower, at 3.5 mmol/L.

Which of the following is NOT a common cause of hypercalcaemia?
- Primary hyperparathyroidism: No, primary hyperparathyroidism is a common cause of hypercalcaemia.
- Malignancy: No, malignancy is a common cause of hypercalcaemia.
- Hypervitaminosis D: No, hypervitaminosis D is a common cause of hypercalcaemia.
- Hypothyroidism: Well done! This is correct. Hypothyroidism is not a common cause of hypercalcaemia.
- Prolonged immobilisation: No, prolonged immobilisation is a common cause of hypercalcaemia.

What initial therapy is often used to treat hypercalcaemia?
- Intravenous fluids: Well done! This is correct. Intravenous fluids are often used as the initial therapy for hypercalcaemia.
- Oral calcium supplements: No, oral calcium supplements are not used to treat hypercalcaemia.
- Antihypertensive medication: No, antihypertensive medications are not used to treat hypercalcaemia.
- Insulin therapy: No, insulin therapy is not used to treat hypercalcaemia.
- Antidepressants: No, antidepressants are not used to treat hypercalcaemia.

Which class of drugs inhibits bone resorption by osteoclasts and is often used to treat hypercalcaemia of malignancy?
- Bisphosphonates: Well done! This is correct. Bisphosphonates inhibit bone resorption by osteoclasts and are often used to treat hypercalcaemia of malignancy.
- Beta-blockers: No, beta-blockers do not inhibit bone resorption by osteoclasts.
- Antihistamines: No, antihistamines do not inhibit bone resorption by osteoclasts.
- NSAIDs: No, NSAIDs do not inhibit bone resorption by osteoclasts.
- Antibiotics: No, antibiotics do not inhibit bone resorption by osteoclasts.

What laboratory test is NOT typically used to diagnose hypercalcaemia?
- Intact parathyroid hormone (iPTH): No, intact parathyroid hormone (iPTH) is commonly used to diagnose hypercalcaemia.
- Parathyroid hormone-related protein (PTHrP): No, parathyroid hormone-related protein (PTHrP) is commonly used to diagnose hypercalcaemia.
- Calcitriol levels: No, calcitriol levels are commonly used to diagnose hypercalcaemia.
- Calcifediol levels: No, calcifediol levels are commonly used to diagnose hypercalcaemia.
- Blood glucose levels: Well done! This is correct. Blood glucose levels are not typically used to diagnose hypercalcaemia.

Which of the following treatments is used for immediate but short-term reduction of calcium levels?
- Denosumab: No, denosumab is not typically used for immediate reduction of calcium levels.
- Glucocorticoids: No, glucocorticoids are not typically used for immediate reduction of calcium levels.
- Calcitonin: Well done! This is correct. Calcitonin is used for immediate but short-term reduction of calcium levels.
- Phosphate therapy: No, phosphate therapy is not typically used for immediate reduction of calcium levels.
- Dialysis: No, dialysis is not typically used for immediate reduction of calcium levels.

In animals, what is a common cause of hypercalcaemia in grazing animals?
- Ingestion of specific plants like Trisetum flavescens: Well done! This is correct. Ingestion of specific plants like Trisetum flavescens is a common cause of hypercalcaemia in grazing animals.
- Viral infections: No, viral infections are not a common cause of hypercalcaemia in grazing animals.
- Lack of exercise: No, lack of exercise is not a common cause of hypercalcaemia in grazing animals.
- Exposure to extreme cold: No, exposure to extreme cold is not a common cause of hypercalcaemia in grazing animals.
- Genetic disorders: No, genetic disorders are not a common cause of hypercalcaemia in grazing animals.
188827
https://yourthriftycoteacher.com/fun-area-and-perimeter-activities-for-teaching/
Fun Area and Perimeter Activities for Teaching 4th and 5th Grade

Updated: September 3, 2025

Let's be honest. Teaching area and perimeter can feel like a never-ending cycle of reminders and "Wait, which one is which again?" That's why fun area and perimeter activities that start simple and build in complexity can make these concepts stick. When students get to manipulate, measure, and do the math, they're far more likely to remember what it actually means.

This post walks through:
- What is area?
- What is perimeter?
- What are some scaffolded, hands-on activities that help students make sense of both?

What is area?

Area is the amount of space inside a shape. Imagine how many square tiles you'd need to completely cover the floor of a room. We typically measure area using square units like square inches, square feet, or square centimeters. In 4th and 5th grade, students are often introduced to area through rectangular shapes and later apply that understanding to irregular figures. To calculate area, students multiply the length by the width (for rectangles) or count square units in a grid.

What is perimeter?

Perimeter is the distance around the outside of a shape. It's like measuring the length of a fence around a yard or the trim around a bulletin board. To calculate perimeter, students add the lengths of all the sides. For rectangles and squares, many students use a formula like: Perimeter = 2 × (length + width), or Perimeter = l + l + w + w. While the formula is important, so is the conceptual understanding that perimeter is a total distance. It's something we can walk, measure, or outline.

What are some hands-on activities for teaching area and perimeter?

The following activities are organized in a scaffolded sequence. They begin with hands-on exploration, move into real-world application, and end with a project-based design challenge. Each step builds on the last, helping students make meaningful connections along the way.

1. Teaching area and perimeter with Post-It notes

Begin with 4 square Post-It notes. Have students arrange them into a 2 × 2 square and then:
- Count the area in square units → The area is 4 square units (2 rows × 2 columns = 4).
- Measure the perimeter by counting the length of the sides → The perimeter is 8 units (each side is 2 units long; 2 + 2 + 2 + 2 = 8).

Next, ask students to rearrange the same 4 Post-It notes into a 1 × 4 rectangle (or 4 × 1). Then:
- Count the area in square units: the area stays the same → Still 4 square units.
- Measure the perimeter by counting the length of the sides: the perimeter changes → Now the rectangle has sides that are 1 unit and 4 units → The perimeter is 10 units (4 + 1 + 4 + 1 = 10).

Teaching Point: Even though the area stays the same, the perimeter changes based on how the shape is arranged. This is a great way to help students see that area and perimeter don't always grow or shrink together; they measure different things.

Once students are confident, increase the number: 6, 8, 12, or more. Ask them:
- How many different rectangles can you build?
- Which one has the greatest area?
- Which one has the smallest perimeter?

This task encourages students to manipulate dimensions and see how area and perimeter relate, and that they don't always increase together. (A quick answer-key sketch for this extension appears at the end of this post.)
It's a simple, visual, and low-prep way to introduce both concepts without relying on formulas first.

2. Classroom Measurement Hunt: a real-world activity for practicing area and perimeter

Give students square-inch tiles or paper squares and challenge them to measure items around the room. They can estimate or calculate the:
- Area, by covering surfaces with square units
- Perimeter, using rulers or measuring tape

Sample classroom items:
- A notebook or workbook cover
- A sticky note
- A small whiteboard
- The top of a student desk

For extra engagement, turn it into a math scavenger hunt: "Find three items with a perimeter greater than 15 inches and an area smaller than 20 square inches." This type of activity makes abstract math feel useful and tangible, reinforcing that these skills show up in everyday places.

3. Design a Bedroom Project: a more advanced project for teaching area and perimeter

This activity is a meaningful way to bring together everything students have learned about area and perimeter, all in one hands-on, student-led task. Students begin with a blank grid and a design challenge:
- Include a bed, dresser, and two nightstands
- Draw each furniture piece as a rectangle on the grid
- Color-code each item and label it clearly in a map key
- Calculate the area and perimeter of each item and record it in the key

Because students are working on graph paper, they're applying their understanding directly as they plan and build. It's more than just plugging numbers into formulas. It's spatial reasoning, design thinking, and measurement practice all in one. This project works well as a culminating activity and can be easily adapted to fit your class's needs. Consider adding extensions like:
- A total square-unit "floor space" budget
- Specific perimeter requirements for certain items
- A written math reflection: Which item had the largest area? Which had the longest perimeter? Why?

The Design-a-Bedroom project, along with printable anchor charts, practice pages, word problems, and a review quiz, is included in this ready-to-use area and perimeter resource on TpT.

How do these activities support student learning?

Scaffolded area and perimeter activities like these help students move from concrete understanding to application. By starting with square units and building toward real-world design, students develop a stronger grasp of both concepts.
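For teachers who want a quick answer key for the Post-It extension above ("How many different rectangles can you build?"), here is a small Python sketch. It is our addition, not part of the activities themselves, and just one possible way to enumerate the options.

```python
# Quick answer key for the Post-It extension: for a given number of unit
# squares, list every whole-number rectangle, its area, and its perimeter.

def rectangles(tiles: int):
    for length in range(1, tiles + 1):
        if tiles % length == 0:
            width = tiles // length
            if length <= width:          # skip mirrored duplicates (4 x 1 vs 1 x 4)
                yield length, width, tiles, 2 * (length + width)

for l, w, area, perim in rectangles(12):
    print(f"{l} x {w}: area = {area} sq units, perimeter = {perim} units")
# 1 x 12: area = 12 sq units, perimeter = 26 units
# 2 x 6: area = 12 sq units, perimeter = 16 units
# 3 x 4: area = 12 sq units, perimeter = 14 units
```

Running it for 12 tiles shows at a glance that every arrangement has the same area, while the 3 × 4 arrangement has the smallest perimeter.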
188828
https://www.usu.edu/math/wang/cpam.pdf
On the Caffarelli-Kohn-Nirenberg Inequalities: Sharp Constants, Existence (and Nonexistence), and Symmetry of Extremal Functions

FLORIN CATRINA AND ZHI-QIANG WANG
Utah State University

Dedicated to Professor L. Nirenberg on the occasion of his 75th birthday

Abstract

Consider the following inequalities due to Caffarelli, Kohn, and Nirenberg:
\[
\Big(\int_{\mathbb{R}^N}|x|^{-bp}|u|^p\,dx\Big)^{2/p}\le C_{a,b}\int_{\mathbb{R}^N}|x|^{-2a}|\nabla u|^2\,dx,
\]
where, for $N\ge 3$, $-\infty<a<(N-2)/2$, $a\le b\le a+1$, and $p=2N/(N-2+2(b-a))$. We shall answer some fundamental questions concerning these inequalities, such as the best embedding constants, the existence and nonexistence of extremal functions, and their qualitative properties. While the case $a\ge 0$ has been studied extensively and a complete solution is known, little has been known for the case $a<0$. Our results for the case $a<0$ reveal some new phenomena which are in striking contrast with those for the case $a\ge 0$. Results for $N=1$ and $N=2$ are also given. © 2001 John Wiley & Sons, Inc.

Communications on Pure and Applied Mathematics, Vol. LIV, 0229–0258 (2001)

1 Introduction

Among a much more general family of inequalities, Caffarelli, Kohn, and Nirenberg established the following: for all $u\in C_0^\infty(\mathbb{R}^N)$,
\[
(1.1)\qquad \Big(\int_{\mathbb{R}^N}|x|^{-bp}|u|^p\,dx\Big)^{2/p}\le C_{a,b}\int_{\mathbb{R}^N}|x|^{-2a}|\nabla u|^2\,dx,
\]
where, for $N\ge 3$,
\[
(1.2)\qquad -\infty<a<\frac{N-2}{2},\qquad a\le b\le a+1,\qquad p=\frac{2N}{N-2+2(b-a)}.
\]
The cases $N=2$ and $N=1$ will be treated in a separate section. The conditions for these cases are, for $N=2$,
\[
(1.3)\qquad -\infty<a<0,\qquad a<b\le a+1,\qquad p=\frac{2}{b-a},
\]
and, for $N=1$,
\[
(1.4)\qquad -\infty<a<-\frac12,\qquad a+\frac12<b\le a+1,\qquad p=\frac{2}{-1+2(b-a)}.
\]
Let $D_a^{1,2}(\mathbb{R}^N)$ be the completion of $C_0^\infty(\mathbb{R}^N)$ with respect to the inner product
\[
(1.5)\qquad (u,v)=\int_{\mathbb{R}^N}|x|^{-2a}\nabla u\cdot\nabla v\,dx.
\]
Then we see that (1.1) holds for $u\in D_a^{1,2}(\mathbb{R}^N)$. We define
\[
(1.6)\qquad S(a,b)=\inf_{u\in D_a^{1,2}(\mathbb{R}^N)\setminus\{0\}}E_{a,b}(u)
\]
to be the best embedding constant, where
\[
(1.7)\qquad E_{a,b}(u)=\frac{\int_{\mathbb{R}^N}|x|^{-2a}|\nabla u|^2\,dx}{\big(\int_{\mathbb{R}^N}|x|^{-bp}|u|^p\,dx\big)^{2/p}}.
\]
The extremal functions for $S(a,b)$ are ground state solutions of the Euler equation
\[
(1.8)\qquad -\operatorname{div}\big(|x|^{-2a}\nabla u\big)=|x|^{-bp}u^{p-1},\qquad u\ge 0,\quad\text{in }\mathbb{R}^N.
\]
This equation is regarded as a prototype of more general nonlinear degenerate elliptic equations arising in physical phenomena (e.g., [2, 12] and references therein).

Note that the Caffarelli-Kohn-Nirenberg inequalities (1.1) (see also the generalizations by Lin) contain the classical Sobolev inequality ($a=b=0$) and the Hardy inequality ($a=0$, $b=1$) as special cases, which have played important roles in many applications by virtue of the complete knowledge of the best constants, extremal functions, and their qualitative properties (see, e.g., [6, 13, 15, 18] and references therein). Thus it is a fundamental task to study the best constants, the existence (and nonexistence) of extremal functions, as well as their qualitative properties, for inequality (1.1) with parameters $a$ and $b$ in the full parameter domain (1.2).

Much progress has been made for the parameter region
\[
0\le a<\frac{N-2}{2},\qquad a\le b\le a+1
\]
(to which we shall refer as the "a-nonnegative region"). In [1, 23], the best constant and the minimizers for the Sobolev inequality ($a=b=0$) were given by Aubin and Talenti. Lieb considered the case $a=0$, $0<b<1$, and gave the best constants and explicit minimizers. Chou and Chu considered the full a-nonnegative region and gave the best constants and explicit minimizers.
Also for this a-nonnegative region, Lions (for $a=0$) and Wang and Willem (for $a>0$) established the compactness of all minimizing sequences up to dilations, provided $a\le b<a+1$. The symmetry of the minimizers has also been studied. In fact, all nonnegative solutions in $D_a^{1,2}(\mathbb{R}^N)$ of the corresponding Euler equation (1.8) are radially symmetric (in the case $a=b=0$ they are radial with respect to some point) and explicitly given (see [1, 11, 18, 23]). This was established by means of a generalization of the moving plane method (e.g., [5, 10, 14]).

On the other hand, it seems that little is known for parameters in the a-negative region
\[
-\infty<a<0,\qquad a\le b\le a+1.
\]
This also applies to $N=1$ and $N=2$, with $b$ in the corresponding intervals (1.4) and (1.3). The case $-1<a<0$ and $b=0$ was treated recently by Caldiroli and Musina, who gave the existence of ground states. The goal of this paper is to settle some of the fundamental questions concerning inequalities (1.1) with parameters in the a-negative region, such as the best constants, the existence and nonexistence of minimizers, and the symmetry properties of minimizers. For the a-negative region we shall reveal new phenomena that are strikingly different from those for the a-nonnegative region.

To state the results, let $S_p(\mathbb{R}^N)$ be the best embedding constant from $H^1(\mathbb{R}^N)$ into $L^p(\mathbb{R}^N)$, i.e.,
\[
S_p(\mathbb{R}^N)=\inf_{u\in H^1(\mathbb{R}^N)\setminus\{0\}}\frac{\int_{\mathbb{R}^N}|\nabla u|^2+u^2\,dx}{\big(\int_{\mathbb{R}^N}|u|^p\,dx\big)^{2/p}}.
\]
In the theorems stated below, we assume $N\ge 3$. Results for $N=1$ and $N=2$ will be given in Section 7.

THEOREM 1.1 (Best Constants and Nonexistence of Extremal Functions)
(i) $S(a,b)$ is continuous in the full parameter domain (1.2).
(ii) For $b=a+1$, we have $S(a,a+1)=\big(\frac{N-2-2a}{2}\big)^2$, and $S(a,a+1)$ is not achieved.
(iii) For $a<0$ and $b=a$, we have $S(a,a)=S(0,0)$ (the best Sobolev constant), and $S(a,a)$ is not achieved.

THEOREM 1.2 (Best Constants and Existence of Extremal Functions)
(i) For $a<b<a+1$, $S(a,b)$ is always achieved.
(ii) For $b-a\in(0,1)$ fixed, as $a\to-\infty$, $S(a,b)$ is strictly increasing, and
\[
S(a,b)=\Big(\frac{N-2-2a}{2}\Big)^{2(b-a)}\big[S_p(\mathbb{R}^N)+o(1)\big].
\]

THEOREM 1.3 (Symmetry Breaking)
(i) There is $a_0\le 0$ and a function $h(a)$, defined for $a\le a_0$, satisfying $h(a_0)=a_0$, $a<h(a)<a+1$ for $a<a_0$, and $a+1-h(a)\to 0$ as $-a\to\infty$, such that for any $(a,b)$ satisfying $a<a_0$ and $a<b<h(a)$, the minimizer for $S(a,b)$ is nonradial.
(ii) There is an open subset $H$ inside the a-negative region containing the set $\{(a,a)\in\mathbb{R}^2:a<0\}$ such that for any $(a,b)\in H$ with $a<b$, the minimizer for $S(a,b)$ is nonradial.

Though the minimizers may be nonradial, we still have the following:

THEOREM 1.4 (Symmetry Property) For $a\le b<a+1$, any bound state solution $u$ of (1.8) in $D_a^{1,2}$ satisfying $u(x)>0$ for $x\in\mathbb{R}^N\setminus\{0\}$, possibly after a dilation $u(x)\to\tau^{(N-2-2a)/2}u(\tau x)$, satisfies the "modified inversion" symmetry:
\[
u\Big(\frac{x}{|x|^2}\Big)=|x|^{N-2-2a}u(x).
\]
Moreover, writing $|x|=e^{-t}$ and $\theta=x/|x|$, we have that for fixed $\theta$, $e^{-\frac{N-2-2a}{2}t}u(e^{-t}\theta)$ is even in $t$ and monotonically decreasing in $t$ for $t>0$.

REMARK 1.5 Some comments are in order here.
1. In Theorems 1.1 and 1.2, we have given the best constants for $(a,b)$ on the "boundary" of the a-negative region. Since $S(a,b)$ is continuous, we also obtain estimates for $S(a,b)$ near the boundary of the parameter domain. From Theorems 1.2 and 1.3, there are no closed-form minimizers, so it seems to be very difficult to examine the best constants in the interior of the region.
2. For the special case $b=0$, $-1<a<0$, the existence of a minimizer was given by Caldiroli and Musina using a quite different method.
3. In the case $b=a$, we have $p=2^*$, the critical Sobolev exponent. The situation is quite delicate, since for $a\ge 0$, $S(a,a)$ is strictly decreasing in $a$ and the minimization problem is solvable, as we mentioned above [11, 25], while for $a<0$ we have $S(a,a)=S(0,0)$ and the nonexistence result in Theorem 1.1.
4. The results in (i) and (ii) of Theorem 1.3 overlap, but neither implies the other. The importance of (ii) is that symmetry breaking occurs for all $a<0$ if $b$ is sufficiently close to $a$.
5. For Theorem 1.3(i), $a_0$ and $h(a)$ will be given explicitly in the proof in Section 5.

Our approach to the problem in this paper is quite different from that used in the previous papers quoted above (see [1, 7, 11, 18, 22, 23, 25]), in which the problem was worked on directly in $D_a^{1,2}(\mathbb{R}^N)$; we shall take a detour and convert the problem to an equivalent one defined on $H^1(\mathbb{R}\times S^{N-1})$. While taking advantage of the two formulations, we shall work mainly with the equivalent one on $H^1(\mathbb{R}\times S^{N-1})$. The reformulation enables us to make use of a combination of analytical tools such as a compactness argument, rescaling, the concentration compactness principle, bifurcation analysis, the moving plane method, etc. Moreover, our approach also gives a different proof of the inequalities (1.1) (see Remark 2.4).

The organization of the paper is as follows. In Section 2, we introduce a transformation that transforms our problem in $\mathbb{R}^N$ to one on the space $\mathbb{R}\times S^{N-1}$, on which we have a family of inequalities corresponding to (1.1) and an Euler equation corresponding to (1.8). The two problems will be shown to be equivalent, and we shall mainly work on the transformed one on $\mathbb{R}\times S^{N-1}$. The advantage in working on the latter is that the equation is autonomous and is defined in $H^1(\mathbb{R}\times S^{N-1})$. Radial solutions (as we shall see, the only bound state radial solutions are the ground state solutions in the radially symmetric class) will be examined completely, and their energy levels will be computed so that some comparison arguments can be made later. In Section 3, we prove Theorem 1.1, first establishing the continuity of $S(a,b)$ in $(a,b)$ and then giving the nonexistence result for the case $b=a$ with a combination of continuity and comparison arguments. In Section 4, the existence of a minimizer for the case $a<b<a+1$ is given by using a compactness argument; an asymptotic estimate for $S(a,b)$ as $a\to-\infty$ is given using a concentration compactness principle. In Section 5, we establish the symmetry-breaking result (Theorem 1.3). First a bifurcation analysis is done to obtain the symmetry breaking for $a$ away from $0$. For $a$ close to $0$ it is much subtler, and some continuity and comparison arguments are employed. Section 6 is devoted to establishing the modified inversion symmetry (up to a dilation) for all bound state solutions of (1.8) by using the moving plane method. In Section 7, we treat the cases $N=1$ and $N=2$. For $N=1$ we have a complete solution of the problem, including the identification of all bound state solutions. Finally, in Section 8, we state results for a related problem that can be solved using our results for (1.8), and in Section 9 we point out some related open questions.
2 An Equivalent Problem and Some Preliminaries

In this section, we start by introducing a family of transformations that transform our original problem into one defined on a cylinder $\mathbb{R}\times S^{N-1}$. The two problems will be shown to be equivalent in a sense that will be precisely specified. Then some preliminary results on the radial solutions will be given.

2.1 Equivalent Problems on $\mathbb{R}\times S^{N-1}$

From problem (1.1) and equation (1.8) on $\mathbb{R}^N$ we shall derive an equivalent minimization problem and a corresponding Euler equation on $\mathbb{R}\times S^{N-1}$. We shall use the notation $\mathcal{C}=\mathbb{R}\times S^{N-1}$. While working on both problems to take advantage of the two formulations, we shall get most of our results on the cylinder $\mathcal{C}$. For integrals over a domain included in $\mathcal{C}$, by $d\mu$ we denote the volume element on $\mathcal{C}$. Also, by $|\nabla u|^2$ we understand $g^{ij}u_iu_j$, where $(g^{ij})$ are the components of the inverse matrix of the metric induced from $\mathbb{R}^{N+1}$. For points on $\mathcal{C}$ we use either the notation $y$, to identify a point in $\mathbb{R}^{N+1}$, or $(t,\theta)$, to identify a point in $\mathbb{R}\times S^{N-1}$.

To $u$, a smooth function with compact support in $\mathbb{R}^N\setminus\{0\}$, we associate $v$, a smooth function with compact support on $\mathcal{C}$, by the transformation
\[
(2.1)\qquad u(x)=|x|^{-\frac{N-2-2a}{2}}\,v\Big(-\ln|x|,\frac{x}{|x|}\Big).
\]
Here, for $x\in\mathbb{R}^N\setminus\{0\}$, with $t=-\ln|x|$ and $\theta=x/|x|$, we have $(t,\theta)\in\mathcal{C}$. Let us denote by $L^p_b(\mathbb{R}^N)=\{u:\int_{\mathbb{R}^N}|x|^{-bp}|u|^p\,dx<\infty\}$ the weighted $L^p$ space. We need the following lemma.

LEMMA 2.1 For $a<\frac{N-2}{2}$, $a\le b\le a+1$, and $p=\frac{2N}{N-2+2(b-a)}$, it holds that
\[
D_a^{1,2}(\mathbb{R}^N)=\overline{C_0^\infty(\mathbb{R}^N\setminus\{0\})}^{\,\|\cdot\|},
\]
where $\|\cdot\|$ is the norm in $D_a^{1,2}(\mathbb{R}^N)$ given by (1.5). Moreover, $L^p_b(\mathbb{R}^N)$ is also given by the completion of $C_0^\infty(\mathbb{R}^N\setminus\{0\})$ under its norm.

PROOF: By the definition of $D_a^{1,2}(\mathbb{R}^N)$, it suffices to show $C_0^\infty(\mathbb{R}^N)\subset\overline{C_0^\infty(\mathbb{R}^N\setminus\{0\})}^{\,\|\cdot\|}$. Let $\rho(t)$ be a cutoff function that is $1$ for $t\ge 2$ and $0$ for $0<t\le 1$. For a fixed $u\in C_0^\infty(\mathbb{R}^N)$, we define $u_\varepsilon(x)=\rho(|x|/\varepsilon)u(x)\in C_0^\infty(\mathbb{R}^N\setminus\{0\})$. Then it is easy to check that $\|u_\varepsilon-u\|\to 0$ as $\varepsilon\to 0$. The second part is similar.

Now for $u\in C_0^\infty(\mathbb{R}^N\setminus\{0\})$, by a direct computation we have
\[
\int_{\mathbb{R}^N}|x|^{-2a}|\nabla u|^2\,dx=\int_{\mathbb{R}^N}|x|^{-N}\Big(|\nabla_\theta v|^2+\Big(v_t+\frac{N-2-2a}{2}v\Big)^2\Big)dx;
\]
therefore
\[
\int_{\mathbb{R}^N}|x|^{-2a}|\nabla u|^2\,dx=\int_{\mathcal{C}}|\nabla_\theta v|^2+\Big(v_t+\frac{N-2-2a}{2}v\Big)^2\,d\mu=\int_{\mathcal{C}}|\nabla_\theta v|^2+v_t^2+\Big(\frac{N-2-2a}{2}\Big)^2v^2\,d\mu.
\]
Also,
\[
\int_{\mathbb{R}^N}|x|^{-bp}u^p\,dx=\int_{\mathbb{R}^N}|x|^{-N}v^p\,dx=\int_{\mathcal{C}}v^p\,d\mu.
\]
From these and Lemma 2.1, we immediately have the following:

PROPOSITION 2.2 The mapping given in (2.1) is a Hilbert space isomorphism from $D_a^{1,2}(\mathbb{R}^N)$ to $H^1(\mathcal{C})$, where the inner product on $H^1(\mathcal{C})$ is
\[
(v,w)=\int_{\mathcal{C}}\nabla v\cdot\nabla w+\Big(\frac{N-2-2a}{2}\Big)^2vw\,d\mu.
\]

Now we define an energy functional on $H^1(\mathcal{C})$:
\[
(2.2)\qquad F_{a,b}(v)=\frac{\int_{\mathcal{C}}|\nabla_\theta v|^2+v_t^2+\big(\frac{N-2-2a}{2}\big)^2v^2\,d\mu}{\big(\int_{\mathcal{C}}|v|^p\,d\mu\big)^{2/p}}.
\]
If $u\in D_a^{1,2}(\mathbb{R}^N)$ and $v\in H^1(\mathcal{C})$ are related through (2.1), then $E_{a,b}(u)=F_{a,b}(v)$. Moreover, if $u$ is a solution of (1.8), then $v$ satisfies
\[
(2.3)\qquad -v_{tt}-\Delta_\theta v+\Big(\frac{N-2-2a}{2}\Big)^2v=v^{p-1},\qquad v>0,\quad\text{on }\mathcal{C},
\]
where $t=-\ln|x|$ and $\Delta_\theta$ is the Laplace operator on the $(N-1)$-sphere. We collect these observations in the following:

PROPOSITION 2.3 With $a$, $b$, and $p$ satisfying (1.2), we have
(i) If $u\in D_a^{1,2}$ and $v\in H^1(\mathcal{C})$ are related through (2.1), then $E_{a,b}(u)=F_{a,b}(v)$.
(ii) For $S(a,b)$ defined in (1.6), it holds that $S(a,b)=\inf_{v\in H^1(\mathcal{C})\setminus\{0\}}F_{a,b}(v)$.
(iii) Solutions of (1.8) and (2.3) are in one-to-one correspondence, being related through (2.1).
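The following short Python sketch (our addition, not part of the paper) checks Proposition 2.3(i) numerically for one sample radial function: it evaluates $E_{a,b}(u)$ in radial coordinates on $\mathbb{R}^N$ and $F_{a,b}(v)$ on the cylinder, with $v$ obtained from $u$ via (2.1). The parameter choices $N=3$, $a=-1$, $b=-1/2$ (so $p=3$) and the Gaussian profile are arbitrary.

```python
# Numerical check of Proposition 2.3(i) (our addition, not from the paper):
# for a radial u on R^N and v(t) = e^{-lam t} u(e^{-t}) as in (2.1),
# the two Rayleigh quotients E_{a,b}(u) and F_{a,b}(v) coincide.
import numpy as np

N, a, b = 3, -1.0, -0.5                      # arbitrary sample parameters
p = 2 * N / (N - 2 + 2 * (b - a))            # = 3 here
lam = (N - 2 - 2 * a) / 2                    # = 3/2 here
omega = 4 * np.pi                            # |S^{N-1}| for N = 3

u  = lambda r: np.exp(-r**2)                 # sample radial profile
du = lambda r: -2 * r * np.exp(-r**2)        # its derivative

# E_{a,b}(u), written in radial coordinates on R^N
r = np.linspace(1e-8, 15, 400_001); dr = r[1] - r[0]
num_E = omega * np.sum(r**(N - 1 - 2 * a) * du(r)**2) * dr
den_E = (omega * np.sum(r**(N - 1 - b * p) * u(r)**p) * dr) ** (2 / p)

# F_{a,b}(v) on the cylinder, v given by the transformation (2.1)
t = np.linspace(-15, 25, 400_001); dt = t[1] - t[0]
v = np.exp(-lam * t) * u(np.exp(-t))
v_t = np.gradient(v, dt)
num_F = omega * np.sum(v_t**2 + lam**2 * v**2) * dt
den_F = (omega * np.sum(v**p) * dt) ** (2 / p)

print(num_E / den_E, num_F / den_F)          # agree up to quadrature error
```

Both printed quotients agree up to quadrature error, which is exactly what Proposition 2.3(i) asserts.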
REMARK 2.4 Our approach here gives a new, independent proof of the Caffarelli-Kohn-Nirenberg inequalities for the considered parameters, because by the classical Sobolev embeddings from $H^1(\mathcal{C})$ into $L^p(\mathcal{C})$, $F_{a,b}(v)$ has a positive lower bound on $H^1(\mathcal{C})$, and the transformation (2.1) gives the desired inequalities on $D_a^{1,2}(\mathbb{R}^N)$.

REMARK 2.5 As motivation, we mention that transformations of a similar nature to (2.1) have been used in the past to study radial solutions, where they link two ODEs. For PDEs, this was used recently for the Yamabe problem ($a=b=0$). In this paper we have developed the full-blown version of the transformations to deal with solutions of the PDEs (1.8), and furthermore we have established the equivalence of the function spaces involved.

2.2 Invariance of the Problem (1.8)

In order to study the symmetry property of solutions, we examine the invariance of the problem under the transformation (2.1). As in the case of the Yamabe problem ($a=b=0$), the group of transformations that leaves problem (1.8) invariant is noncompact. The group of translations in $\mathbb{R}^N$ is a symmetry group for (1.8) only in the case $a=b=0$. On the other hand, the dilations
\[
(2.4)\qquad u^\tau(x)=\tau^{\frac{N-2-2a}{2}}u(\tau x),\qquad\tau>0,
\]
leave the problem invariant for all $a$ and $b$; i.e., if $u$ is a solution of (1.8), so is $u^\tau$. This still holds for $N=2$ and $N=1$, but for $N=1$ the situation is a bit different and there is a two-parameter family of dilations (see (7.3)). The group that leaves (2.3) invariant, corresponding to dilations in $\mathbb{R}^N$, is the group of translations in the $t$-direction. If $v$ and $v^\tau$ in $H^1(\mathcal{C})$ are related to $u$ and $u^\tau$ in $D_a^{1,2}(\mathbb{R}^N)$ through (2.1), then
\[
v^\tau(t,\theta)=v(t-\ln\tau,\theta).
\]
Finally, the following modified inversion invariance of (1.8),
\[
(2.5)\qquad \bar u(x)=|x|^{-(N-2-2a)}u\Big(\frac{x}{|x|^2}\Big),
\]
translates on the cylinder into the following obvious symmetry of (2.3):
\[
\bar v(t,\theta)=v(-t,\theta).
\]

2.3 Radial Solutions

Let $D_{a,R}^{1,2}(\mathbb{R}^N)$ be the subspace of $D_a^{1,2}(\mathbb{R}^N)$ consisting of radial functions. Define
\[
(2.6)\qquad R(a,b)=\inf_{u\in D_{a,R}^{1,2}(\mathbb{R}^N)\setminus\{0\}}E_{a,b}(u).
\]
By Proposition 2.3(i) we also have
\[
R(a,b)=\inf_{v\in H_R^1(\mathcal{C})\setminus\{0\}}F_{a,b}(v),
\]
where $H_R^1(\mathcal{C})$ consists of functions independent of $\theta$. We shall find the exact value of $R(a,b)$ and the exact form of the radial solutions that achieve these constants when $a\le b<a+1$. We remark here that our method applies to the a-nonnegative region also, and in fact gives a new approach for the a-nonnegative region; the results we get agree with the known ones in this region.

In order to study the radial solutions of (1.8), we shall need the exact form of particular positive solutions of the following nonlinear second-order ODE:
\[
(2.7)\qquad -v_{tt}+\lambda^2v=v^{p-1},\qquad v>0,\quad\text{in }\mathbb{R},
\]
with $p>2$. The problem can be associated to the Hamiltonian system
\[
\frac{d}{dt}v=w,\qquad \frac{d}{dt}w=\lambda^2v-v^{p-1}.
\]
We have the Hamiltonian
\[
H(v,w)=\frac12w^2-\frac{\lambda^2}{2}v^2+\frac1p v^p.
\]
All solutions correspond to level curves of $H(v,w)$. Up to translations, there is only one homoclinic solution $v$, which is on the level $H(v,w)=0$. The levels below this one give $v$ positive, periodic, and bounded away from zero. For the levels above, $v$ changes sign, so we lose positivity. The only positive solutions that are in $H^1(\mathbb{R})$ are translates of
\[
(2.8)\qquad v(t)=\Big(\frac{\lambda^2p}{2}\Big)^{\frac{1}{p-2}}\Big[\cosh\Big(\frac{p-2}{2}\lambda t\Big)\Big]^{-\frac{2}{p-2}}.
\]
A direct calculation gives that for the $v$ above,
\[
(2.9)\qquad \frac{\int_{\mathbb{R}}v_t^2+\lambda^2v^2\,dt}{\big(\int_{\mathbb{R}}v^p\,dt\big)^{2/p}}=\frac{2p\,\lambda^{(p+2)/p}}{(p-2)^{(p-2)/p}}\left(\frac{\Gamma^2\big(\frac{p}{p-2}\big)}{\Gamma\big(\frac{2p}{p-2}\big)}\right)^{\frac{p-2}{p}}.
\]
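As a numerical sanity check (our addition, not part of the paper), one can verify both that the profile (2.8) solves (2.7) and that its Rayleigh quotient matches the closed form (2.9); the sample values $\lambda=1.5$ and $p=3$ are arbitrary.

```python
# Numerical sanity check (our addition): the explicit profile (2.8) solves
# the ODE (2.7), and its Rayleigh quotient matches the Gamma-function
# closed form (2.9). Sample values lam = 1.5, p = 3 are arbitrary.
import numpy as np
from math import gamma

lam, p = 1.5, 3.0
t = np.linspace(-40, 40, 400_001); dt = t[1] - t[0]

A = (lam**2 * p / 2) ** (1 / (p - 2))
v = A * np.cosh((p - 2) / 2 * lam * t) ** (-2 / (p - 2))

# residual of (2.7), -v'' + lam^2 v - v^(p-1), via centered differences
v_tt = (v[2:] - 2 * v[1:-1] + v[:-2]) / dt**2
res = -v_tt + lam**2 * v[1:-1] - v[1:-1] ** (p - 1)
print("max |ODE residual|:", np.abs(res).max())   # tiny (finite-difference error)

# Rayleigh quotient of v versus the closed form (2.9)
v_t = np.gradient(v, dt)
quotient = np.sum(v_t**2 + lam**2 * v**2) * dt / (np.sum(v**p) * dt) ** (2 / p)
closed = (2 * p * lam ** ((p + 2) / p) / (p - 2) ** ((p - 2) / p)
          * (gamma(p / (p - 2)) ** 2 / gamma(2 * p / (p - 2))) ** ((p - 2) / p))
print(quotient, closed)                           # the two values should agree
```

For these sample values both prints give approximately 3.797, consistent with (2.9).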
Now, when searching for radial solutions, equation (2.3) becomes
\[
(2.10)\qquad v_{tt}-\Big(\frac{N-2-2a}{2}\Big)^2v+v^{p-1}=0,\qquad v>0,\quad\text{on }\mathbb{R},
\]
which corresponds to equation (2.7) with $\lambda=\frac{N-2-2a}{2}$. According to (2.8), the homoclinic solutions of (2.10) are translates of
\[
(2.11)\qquad v(t)=\left[\frac{N(N-2-2a)^2}{4(N-2(1+a-b))}\right]^{\frac{N-2(1+a-b)}{4(1+a-b)}}\left[\cosh\Big(\frac{(N-2-2a)(1+a-b)}{N-2(1+a-b)}\,t\Big)\right]^{-\frac{N-2(1+a-b)}{2(1+a-b)}}.
\]
The radial solution in $\mathbb{R}^N$ of (1.8) corresponding to this $v$ is
\[
(2.12)\qquad u(x)=\left[\frac{N(N-2-2a)^2}{N-2(1+a-b)}\right]^{\frac{N-2(1+a-b)}{4(1+a-b)}}\left(1+|x|^{\frac{2(N-2-2a)(1+a-b)}{N-2(1+a-b)}}\right)^{-\frac{N-2(1+a-b)}{2(1+a-b)}}.
\]
All radial solutions in $\mathbb{R}^N$ of (1.8) are dilations of this $u$. Note that by substituting
\[
\lambda=\frac{N-2-2a}{2}\qquad\text{and}\qquad p=\frac{2N}{N-2(1+a-b)}
\]
in (2.9), we can evaluate the energy of any radial solution in $H^1(\mathcal{C})$: $R(a,b)=E_{a,b}(u)=F_{a,b}(v)$, and
\[
(2.13)\qquad R(a,b)=\frac{N\,\omega_{N-1}^{\frac{2(1+a-b)}{N}}\,(N-2-2a)^{\frac{2(N-(1+a-b))}{N}}}{2^{\frac{2(1+a-b)}{N}}\,(N-2(1+a-b))^{\frac{N-2(1+a-b)}{N}}\,(1+a-b)^{\frac{2(1+a-b)}{N}}}\left(\frac{\Gamma^2\big(\frac{N}{2(1+a-b)}\big)}{\Gamma\big(\frac{N}{1+a-b}\big)}\right)^{\frac{2(1+a-b)}{N}}.
\]

PROPOSITION 2.6 Up to a dilation (2.4), all radial solutions of (1.8) are explicitly given by (2.12), and $R(a,b)$ is given in (2.13).

REMARK 2.7 In the case $a>0$, this is the best constant as found by Chou and Chu, i.e., $R(a,b)=S(a,b)$. Also, for $a=0$ and $0\le b<1$, it is the best constant found by Lieb. In the case $a\ge 0$, up to a dilation (and also a translation in the case $a=b=0$), (2.12) gives all bound state solutions of (1.8) that achieve equality in the Caffarelli-Kohn-Nirenberg inequality (see [11, 18]).

3 Best Constants and Nonexistence of Extremal Functions

To prove Theorem 1.1(i), we need a couple of lemmas.

LEMMA 3.1 Let $a_0<\frac{N-2}{2}$, $a_0\le b_0\le a_0+1$. Then
\[
\limsup_{(a,b)\to(a_0,b_0)}S(a,b)\le S(a_0,b_0).
\]

PROOF: For any $\varepsilon>0$, there is a nonnegative function $v\in C_0^\infty(\mathcal{C})$ such that
\[
F_{a_0,b_0}(v)\le S(a_0,b_0)+\frac{\varepsilon}{2}.
\]
Note that as $(a,b)\to(a_0,b_0)$, $v^p(x)\to v^{p_0}(x)$ for all $x$. For any $p\in[2,2^*]$, $v^p(x)\le w(x)$, where $w(x)=v^2(x)$ if $v(x)<1$ and $w(x)=v^{2^*}(x)$ if $v(x)\ge 1$. Clearly $w$ is integrable; therefore, by the dominated convergence theorem, we have
\[
\lim_{(a,b)\to(a_0,b_0)}\int_{\mathcal{C}}v^p\,d\mu=\int_{\mathcal{C}}v^{p_0}\,d\mu.
\]
From this, and because $\lambda$ is continuous in $a$, we get that there is $\delta>0$ such that $|(a,b)-(a_0,b_0)|<\delta$ implies
\[
S(a,b)\le F_{a,b}(v)\le F_{a_0,b_0}(v)+\frac{\varepsilon}{2}\le S(a_0,b_0)+\varepsilon.
\]
Let $\varepsilon\to 0$.

LEMMA 3.2 Let $(p_n)\subset[2,2^*]$ be a sequence convergent to $p$. If a sequence $(u_n)$ is uniformly bounded by $M$ in $H^1(\mathcal{C})$, then
(i) if $p\in(2,2^*)$, we have
\[
\lim_{n\to\infty}\int_{\mathcal{C}}|u_n|^{p_n}-|u_n|^p\,d\mu=0;
\]
(ii) if $p=2$ or $p=2^*$, we have
\[
\limsup_{n\to\infty}\int_{\mathcal{C}}\big(|u_n|^{p_n}-|u_n|^p\big)\,d\mu\le 0.
\]

PROOF: We first prove (i). By the mean value theorem, there are functions $\xi_n$ defined on $\mathcal{C}$, with values between $p_n$ and $p$, such that
\[
\int_{\mathcal{C}}|u_n|^{p_n}-|u_n|^p\,d\mu=\int_{\mathcal{C}}\ln|u_n|\,|u_n|^{\xi_n(x)}(p_n-p)\,d\mu.
\]
Since $p\in(2,2^*)$, let $\varepsilon>0$ be such that $[p-\varepsilon,p+\varepsilon]\subset(2,2^*)$. Let $n_\varepsilon$ be such that for $n\ge n_\varepsilon$ we have $|p_n-p|<\varepsilon$; therefore
\[
\Big|\int_{\mathcal{C}}|u_n|^{p_n}-|u_n|^p\,d\mu\Big|\le|p_n-p|\left(\int_{|u_n|>1}\ln|u_n|\,|u_n|^{p+\varepsilon}\,d\mu+\int_{0<|u_n|<1}\ln\frac{1}{|u_n|}\,|u_n|^{p-\varepsilon}\,d\mu\right).
\]
The key now is to show that the two integrals on the right-hand side are bounded as $n\to\infty$. There is a constant $C$ depending only on $p$ such that
\[
\ln u\le Cu^{2^*-p-\varepsilon}\quad\text{for all }u>1\qquad\text{and}\qquad\ln\frac1u\le\frac{C}{u^{p-\varepsilon-2}}\quad\text{for all }0<u<1.
\]
With
\[
S_p(\mathcal{C})=\inf_{u\in H^1(\mathcal{C})\setminus\{0\}}\frac{\int_{\mathcal{C}}|\nabla u|^2+u^2\,d\mu}{\big(\int_{\mathcal{C}}|u|^p\,d\mu\big)^{2/p}},
\]
we obtain
\[
\int_{|u_n|>1}\ln|u_n|\,|u_n|^{p+\varepsilon}\,d\mu\le C\int_{|u_n|>1}|u_n|^{2^*}\,d\mu\le C\Big(\frac{M}{S_{2^*}(\mathcal{C})}\Big)^{\frac{2^*}{2}}.
\]
We also have that
\[
\int_{0<|u_n|<1}\ln\frac{1}{|u_n|}\,|u_n|^{p-\varepsilon}\,d\mu\le C\int_{0<|u_n|<1}|u_n|^2\,d\mu\le C\,\frac{M}{S_2(\mathcal{C})}.
\]
This concludes the proof of (i).
For part (ii), we use the same method as above, after we make the following estimates: for $p=2$,
\[
\int_{\mathcal{C}}|u_n|^{p_n}-|u_n|^2\,d\mu\le\int_{|u_n|>1}|u_n|^{p_n}-|u_n|^2\,d\mu,
\]
and for $p=2^*$,
\[
\int_{\mathcal{C}}|u_n|^{p_n}-|u_n|^{2^*}\,d\mu\le\int_{0<|u_n|<1}|u_n|^{p_n}-|u_n|^{2^*}\,d\mu.
\]

REMARK 3.3 In the cases $p=2$ or $p=2^*$, one can construct sequences $(u_n)$ bounded in $H^1(\mathcal{C})$ such that $|u_n|_{L^p}=1$ for all $n$, while $|u_n|_{L^{p_n}}\to 0$ as $p_n\to p$. Thus Lemma 3.2(ii) is sharp.

PROOF OF THEOREM 1.1(i): According to Lemma 3.1, it suffices to show that
\[
\liminf_{(a,b)\to(a_0,b_0)}S(a,b)\ge S(a_0,b_0).
\]
Assume there is a sequence $(a_n,b_n)\to(a_0,b_0)$ such that
\[
(3.1)\qquad \lim_{n\to\infty}S(a_n,b_n)<S(a_0,b_0).
\]
Then there are $\varepsilon>0$ and functions $(v_n)\subset H^1(\mathcal{C})$ such that
\[
\int_{\mathcal{C}}|v_n|^{p_n}\,d\mu=1\qquad\text{and}\qquad S(a_0,b_0)-\varepsilon\ge F_{a_n,b_n}(v_n).
\]
Clearly, $(v_n)$ is bounded in $H^1(\mathcal{C})$. From Lemma 3.2, we get
\[
F_{a_n,b_n}(v_n)+o(1)\ge F_{a_0,b_0}(v_n)\ge S(a_0,b_0).
\]
This and (3.1) give the desired contradiction.

REMARK 3.4 A similar proof shows that $R(a,b)$ is continuous in $(a,b)$ in the full parameter region, including the upper boundary $\{b=a+1\}$, for which no radial solutions exist.

PROOF OF THEOREM 1.1(ii): Clearly, $F_{a,a+1}(v)\ge\big(\frac{N-2-2a}{2}\big)^2$ for all $v\in H^1(\mathcal{C})$. On the other hand, one can easily construct a sequence $(v_n)\subset H^1(\mathcal{C})$ of radial functions such that $F_{a,a+1}(v_n)\to\big(\frac{N-2-2a}{2}\big)^2$. Therefore,
\[
S(a,a+1)=\Big(\frac{N-2-2a}{2}\Big)^2.
\]
For the nonexistence of minimizers, one notes that for $\lambda\ge 1$ the equation $-\Delta v+\lambda^2v=v$ has no nonzero solution in $H^1(\mathcal{C})$. For $0<\lambda<1$, i.e., $(N-4)/2<a<(N-2)/2$, assume that $S(a,a+1)$ is achieved by some function $v\in H^1(\mathcal{C})$. By the maximum principle, $v>0$ everywhere. Denote by $f(t)$ the average of $v$ on the spheres $t=\mathrm{const}$. Then $f$ is a positive function in $H^1(\mathbb{R})$ and satisfies the ODE $-f_{tt}+\lambda^2f=f$. The only nonnegative solution is $f\equiv 0$. Therefore, for all $a<\frac{N-2}{2}$, the infimum $S(a,a+1)$ is not achieved.

PROOF OF THEOREM 1.1(iii): The case $a=b=0$ is well-known (the Yamabe problem in $\mathbb{R}^N$). In this case, the infimum $S(0,0)$ is achieved only by the functions
\[
U_{\mu,y}(x)=C\,\frac{\mu^{(N-2)/2}}{\big(\mu^2+|x-y|^2\big)^{(N-2)/2}},\qquad\mu>0,\quad y\in\mathbb{R}^N.
\]
Note that for $a\in(-N/2,(N-2)/2)$, $U_{\mu,y}\in D_a^{1,2}$. For $y\ne 0$, by a direct computation we get, for $a\in(-N/2,(N-2)/2)$,
\[
S(0,0)=\lim_{\mu\to 0}E_{a,a}(U_{\mu,y}).
\]
Due to this fact one concludes that for $a\in(-N/2,(N-2)/2)$,
\[
(3.2)\qquad S(a,a)\le S(0,0).
\]
On the other hand, by the expression (2.2), for any $v\in H^1(\mathcal{C})\setminus\{0\}$, $F_{a,a}(v)>F_{0,0}(v)\ge S(0,0)$. Hence $S(a,a)=S(0,0)$ for all $a\in(-N/2,0)$.

Next, we fix $a_1\in(-N/2,0)$. For any $a\le-N/2$ fixed and any $\varepsilon>0$, there is $v\in H^1(\mathcal{C})$ such that
\[
F_{a_1,a_1}(v)\le S(0,0)+\frac{\varepsilon\,\big(\lambda(a_1)^2-\lambda(0)^2\big)}{2\big(\lambda(a)^2-\lambda(a_1)^2\big)},
\]
where $\lambda(a)=(N-2-2a)/2$. Together with $S(0,0)\le F_{0,0}(v)\le F_{a_1,a_1}(v)$, we conclude
\[
\frac{\int_{\mathcal{C}}v^2\,d\mu}{\big(\int_{\mathcal{C}}|v|^{2^*}\,d\mu\big)^{2/2^*}}\le\frac{\varepsilon}{2\big(\lambda(a)^2-\lambda(a_1)^2\big)}.
\]
Then
\[
F_{a,a}(v)=F_{a_1,a_1}(v)+\big(\lambda(a)^2-\lambda(a_1)^2\big)\,\frac{\int_{\mathcal{C}}v^2\,d\mu}{\big(\int_{\mathcal{C}}|v|^{2^*}\,d\mu\big)^{2/2^*}}\le S(0,0)+\varepsilon.
\]
That is, $S(a,a)=S(0,0)$ for all $a\le 0$.

Next we show that $S(a,a)$ is not achieved for $a<0$. If the conclusion is not true, then for some $a<0$ and $v\in H^1(\mathcal{C})$ we get $S(a,a)=F_{a,a}(v)$. But using $F_{a,a}(v)>F_{0,0}(v)\ge S(0,0)$, we get a contradiction to $S(a,a)=S(0,0)$.

4 Best Constants and Existence of Extremal Functions

In this section we prove the existence of a minimizer for $a<0$ and $a<b<a+1$. We also give an asymptotic estimate of $S(a,b)$ as $-a\to\infty$, while $b-a\in(0,1)$ is a fixed constant. We shall need the following lemma. It is analogous to a result on $\mathbb{R}^N$ due to P. L. Lions; the proof is similar to the proof of Lemma 1.21 in [ ], and we omit it here.

LEMMA 4.1 Let $r>0$ and $2\le q<2^*$.
If $(w_n)$ is bounded in $H^1(\mathcal{C})$ and if
\[
\sup_{y\in\mathcal{C}}\int_{B_r(y)\cap\mathcal{C}}|w_n|^q\,d\mu\to 0\quad\text{as }n\to\infty,
\]
then $w_n\to 0$ in $L^p(\mathcal{C})$ for $2<p<2^*$. Here $B_r(y)$ denotes the ball in $\mathbb{R}^{N+1}$ with radius $r$ centered at $y$.

PROOF OF THEOREM 1.2(i): Let $a<0$ and $a<b<a+1$ be fixed. Consider a minimizing sequence $(w_n)\subset H^1(\mathcal{C})$ such that
\[
\int_{\mathcal{C}}|w_n|^p\,d\mu=1\quad\text{for all }n\ge 1\qquad\text{and}\qquad\int_{\mathcal{C}}|\nabla w_n|^2+\Big(\frac{N-2-2a}{2}\Big)^2w_n^2\,d\mu\to S(a,b)\quad\text{as }n\to\infty.
\]
According to Lemma 4.1,
\[
\delta=\liminf_{n\to\infty}\Big(\sup_{y\in\mathcal{C}}\int_{B_r(y)\cap\mathcal{C}}w_n^2\,d\mu\Big)>0.
\]
Eventually, by passing to a subsequence, we may assume there are $(y_n)\subset\mathcal{C}$ and $y_0\in\mathcal{C}$ fixed such that the sequence $v_n(x)=w_n(x-y_n)$ has the property
\[
\int_{B_r(y_0)\cap\mathcal{C}}|v_n|^2\,d\mu>\frac{\delta}{2}.
\]
Clearly,
\[
\int_{\mathcal{C}}|v_n|^p\,d\mu=1\quad\text{for all }n\ge 1\qquad\text{and}\qquad\int_{\mathcal{C}}|\nabla v_n|^2+\Big(\frac{N-2-2a}{2}\Big)^2v_n^2\,d\mu\to S(a,b)\quad\text{as }n\to\infty.
\]
Without loss of generality, we can assume
\[
v_n\rightharpoonup v\ \text{weakly in }H^1(\mathcal{C}),\qquad v_n\to v\ \text{in }L^2_{\mathrm{loc}}(\mathcal{C}),\qquad v_n\to v\ \text{almost everywhere in }\mathcal{C}.
\]
According to the Brezis-Lieb lemma, we have
\[
1=|v|_{L^p}^p+\lim_{n\to\infty}|v_n-v|_{L^p}^p.
\]
Hence
\[
S(a,b)=\lim_{n\to\infty}\int_{\mathcal{C}}|\nabla v_n|^2+\Big(\frac{N-2-2a}{2}\Big)^2v_n^2\,d\mu
\]
\[
=\int_{\mathcal{C}}|\nabla v|^2+\Big(\frac{N-2-2a}{2}\Big)^2v^2\,d\mu+\lim_{n\to\infty}\int_{\mathcal{C}}|\nabla(v_n-v)|^2+\Big(\frac{N-2-2a}{2}\Big)^2(v_n-v)^2\,d\mu
\]
\[
\ge S(a,b)\Big(|v|_{L^p}^2+\big(1-|v|_{L^p}^p\big)^{\frac2p}\Big).
\]
Since $v\not\equiv 0$, we obtain $|v|_{L^p}=1$, and so
\[
\int_{\mathcal{C}}|\nabla v|^2+\Big(\frac{N-2-2a}{2}\Big)^2v^2\,d\mu=S(a,b).
\]

Let $b-a\in(0,1)$ be fixed, so that $p\in(2,2^*)$ is also fixed. We shall consider the asymptotic behavior of $S(a,b)$ as $-a\to\infty$.

PROOF OF THEOREM 1.2(ii): We use a rescaling argument. Let $h_\lambda:\mathbb{R}^{N+1}\to\mathbb{R}^{N+1}$ be the scaling map $h_\lambda(x)=\lambda x$. Denote $\mathcal{C}_\lambda=h_\lambda(\mathcal{C})$, and for $v\in H^1(\mathcal{C})$ define $u\in H^1(\mathcal{C}_\lambda)$ by $u(\lambda x)=v(x)$. For definiteness, on $H^1(\mathcal{C}_\lambda)$ we use the norm $\|u\|^2=\int_{\mathcal{C}_\lambda}|\nabla u|^2+|u|^2\,d\mu$. We have
\[
\int_{\mathcal{C}}|\nabla v|^2+\lambda^2v^2\,d\mu=\lambda^{2-N}\int_{\mathcal{C}_\lambda}|\nabla u|^2+u^2\,d\mu\qquad\text{and}\qquad\int_{\mathcal{C}}|v|^p\,d\mu=\lambda^{-N}\int_{\mathcal{C}_\lambda}|u|^p\,d\mu.
\]
Therefore,
\[
F_{a,b}(v)=\lambda^{2(b-a)}\,\frac{\int_{\mathcal{C}_\lambda}|\nabla u|^2+u^2\,d\mu}{\big(\int_{\mathcal{C}_\lambda}|u|^p\,d\mu\big)^{2/p}}.
\]
Now it suffices to show that
\[
I(\lambda):=\inf_{u\in H^1(\mathcal{C}_\lambda)\setminus\{0\}}\frac{\int_{\mathcal{C}_\lambda}|\nabla u|^2+u^2\,d\mu}{\big(\int_{\mathcal{C}_\lambda}|u|^p\,d\mu\big)^{2/p}}\to S_p(\mathbb{R}^N)\quad\text{as }\lambda\to\infty.
\]
First we have that
\[
(4.1)\qquad \limsup_{\lambda\to\infty}I(\lambda)\le S_p(\mathbb{R}^N).
\]
If (4.4) does not hold, there are ε_0 > 0 and a sequence (λ_n) tending to ∞ such that
I_0 := lim_{n→∞} I(λ_n) ≤ S_p(R^N) − ε_0.
Then there are functions u_n ∈ H¹(C_n) (here C_n = C_{λ_n}) such that
∫_{C_n} |u_n|^p dμ = 1
and
I(λ_n) ≤ ∫_{C_n} |∇u_n|² + u_n² dμ ≤ S_p(R^N) − ε_0.
Now we need a concentration-compactness lemma more detailed than the classical one of P.-L. Lions, along the lines of lemmas 4.1 and 4.2 in the paper of Wang listed in the bibliography. The result there is for the H¹(R^N) setting, but the proof carries over to our situation too; we omit it here. For r > 0 and y_{n,i} ∈ C_n, let Ω_{n,i}(r) = ψ_{y_{n,i},r,λ_n}(B_r(0)).

LEMMA 4.2. Let λ_n → ∞ and let u_n ∈ H¹(C_n) be uniformly bounded (with norm given by ∥u∥² = ∫_{C_n} |∇u|² + |u|² dμ). Assume ∫_{C_n} |u_n|^p dμ = 1. Then there is a subsequence (still denoted by (u_n)), a nonnegative, nonincreasing sequence (α_i) satisfying Σ_{i=1}^∞ α_i = 1, and sequences (y_{n,i})_i ⊂ C_n associated with each α_i > 0 satisfying
(4.5) lim inf_{n→∞} |y_{n,i} − y_{n,j}| = ∞ for any i ≠ j,
such that the following property holds: if α_s > 0 for some s ≥ 1, then for any ε > 0 there exists R > 0 such that for all r ≥ R and all r′ ≥ R,
(4.6) lim sup_{n→∞} [ Σ_{i=1}^s | α_i − ∫_{Ω_{n,i}(r)} |u_n|^p dμ | + | (1 − Σ_{i=1}^s α_i) − ∫_{C_n \ ∪_{i=1}^s Ω_{n,i}(r′)} |u_n|^p dμ | ] < ε.

In Lemma 4.2, fix s ≥ 1 with α_s > 0 such that
(4.7) Σ_{i=1}^s α_i > (I_0 / S_p(R^N))^{p/2}.
For α_s > ε > 0, let R > 0 and (y_{n,i})_i ⊂ C_n be such that for all r, r′ > R we have
(4.8) lim_{n→∞} [ Σ_{i=1}^s | α_i − ∫_{Ω_{n,i}(r)} |u_n|^p dμ | + | (1 − Σ_{i=1}^s α_i) − ∫_{C_n \ ∪_{i=1}^s Ω_{n,i}(r′)} |u_n|^p dμ | ] < ε.
We now consider a cutoff function ρ on R^N that is identically 1 inside B_R(0) and 0 outside B_{2R}(0), with |∇ρ| ≤ 2/R at every point. For 1 ≤ i ≤ s, define ψ = ψ_{y_{n,i},2R,λ_n} as before, and let w_{n,i}(x) = ρ(x)u_n(ψ(x)); these are functions with compact support in R^N. By a direct computation, we get
∫_{R^N} |∇w_{n,i}|² + w_{n,i}² dx ≤ ∫_{Ω_{n,i}(2R)} |∇u_n|² + u_n² dμ + o(1) + C/R,
with C independent of n, ε, and R, and o(1) → 0 as n → ∞. Also,
∫_{R^N} |w_{n,i}|^p dx ≥ ∫_{Ω_{n,i}(R)} |u_n|^p dμ + o(1).
Since
∫_{R^N} |∇w_{n,i}|² + w_{n,i}² dx ≥ (∫_{R^N} |w_{n,i}|^p dx)^{2/p} S_p(R^N),
we obtain
∫_{Ω_{n,i}(2R)} |∇u_n|² + u_n² dμ + o(1) + C/R ≥ (∫_{Ω_{n,i}(R)} |u_n|^p dμ + o(1))^{2/p} S_p(R^N).
Therefore,
∫_{C_n} |∇u_n|² + u_n² dμ ≥ (Σ_{i=1}^s ∫_{Ω_{n,i}(R)} |u_n|^p dμ)^{2/p} S_p(R^N) + o(1) − sC/R.
From (4.8) we get
∫_{C_n} |∇u_n|² + u_n² dμ ≥ (Σ_{i=1}^s α_i − ε)^{2/p} S_p(R^N) + o(1) − sC/R.
Letting n → ∞ and then R → ∞, we obtain
I_0 ≥ (Σ_{i=1}^s α_i − ε)^{2/p} S_p(R^N).
Now let ε → 0 to get
I_0 ≥ (Σ_{i=1}^s α_i)^{2/p} S_p(R^N),
which contradicts (4.7).

5 Symmetry Breaking

For symmetry breaking, we have Theorem 1.3(i) and (ii). The results of (i) and (ii) will be proved using different ideas. For Theorem 1.3(i), the idea is to use bifurcation techniques and to show that for certain (a, b), by perturbing the radial solution v_a given in (2.11), there are directions in which the energy decreases. Since S(a, b) is achieved, the minimizer cannot be radial. This approach has been used for other problems, for example for the bifurcation of positive solutions on annular domains. On the other hand, for Theorem 1.3(ii) we shall employ an idea of Brezis and Nirenberg (who studied a problem with a nearly critical exponent on annular domains) to compare the radial least energy and S(a, b). A continuity argument then gives the conclusion.

We first give the proof of Theorem 1.3(i). We work in H¹(C) here. The linearization of (2.3) at the radial solution v_a decomposes by separation of variables into infinitely many ODEs. Denote by α_k = k(N − 2 + k) the kth eigenvalue of −Δ_θ on S^{N−1}.
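The decomposition just mentioned is worth making explicit. The following display is a reader's check, not part of the original text; it uses only the notation already introduced (the cylinder C = R × S^{N−1} and λ = (N − 2 − 2a)/2):

```latex
% Reader's check (not in the original paper): on C = R x S^{N-1} the Laplacian
% splits as \Delta = \partial_{tt} + \Delta_\theta, and v_a depends only on t.
\[
  w(t,\theta) = f(t)\,\varphi_k(\theta), \qquad
  -\Delta_\theta \varphi_k = \alpha_k \varphi_k, \qquad
  \alpha_k = k(N-2+k),
\]
\[
  -\Delta w + \lambda^2 w - (p-1)\,v_a^{p-2}\,w
  = \bigl(-f_{tt} + \lambda^2 f + \alpha_k f - (p-1)\,v_a^{p-2} f\bigr)\,\varphi_k(\theta),
\]
% so w is an eigenfunction of the linearized operator exactly when f solves
% the ODE stated next, and the eigenvalue problems for different k decouple.
```

In particular, α_0 = 0 and α_1 = N − 1, the two values used below.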
For k ≥ 0, we denote by μ_k and f_k the first eigenvalue and the corresponding (positive) eigenfunction of the eigenvalue problem
−f_tt + λ² f + α_k f − (p − 1)v_a^{p−2} f = μ f.
This eigenvalue problem is well defined since v_a(t) → 0 as |t| → ∞. First, we show that there are a_0 and a function h(a) with a < h(a) < a + 1, defined for a < a_0, such that a < a_0 and a < b < h(a) imply μ_1 < 0. Indeed,
μ_k = inf_{f ∈ H¹(R)\{0}} [∫_R f_t² + λ² f² + α_k f² − (p − 1)v_a^{p−2} f² dt] / [∫_R f² dt].
We use v_a as a test function, and since
∫_C v_{a,t}² + λ²v_a² dμ = ∫_C v_a^p dμ,
we obtain
(5.1) μ_k ≤ −(p − 2) [∫_C v_a^p dμ] / [∫_C v_a² dμ] + α_k.
Since α_0 = 0, clearly we have μ_0 < 0. We also have α_1 = N − 1, and by a direct calculation using (2.11), (5.1) gives
(5.2) μ_1 ≤ − N(1 + a − b)(N − 2 − 2a)² / [(N − 2(1 + a − b))(N − (1 + a − b))] + N − 1.
Note that the right-hand side in (5.2) is negative for
(5.3) a < a_0 := (N − 2)/2 − ((N − 1)/2) √((N − 2)/N)
and
(5.4) a ≤ b < h(a) := 1 + a − 2N / (l(a) + √(l²(a) − 8)), where l(a) = (N − 2 − 2a)²/(N − 1) + 3.
Hence μ_1 is negative for a and b in this range. Note also that a + 1 − h(a) → 0 as a → −∞. The a_0 and h(a) above will be shown to have the property stated in Theorem 1.3(i).

Define w_k = φ_k(θ) f_k, where φ_k is an eigenfunction of −Δ_θ on S^{N−1} with eigenvalue α_k. (φ_0(θ) is just a positive constant, and φ_1(θ) is a first harmonic.) We get
(5.5) −Δw_k + λ²w_k − (p − 1)v_a^{p−2}w_k = μ_k w_k.
We now have the following:

LEMMA 5.1. For s small, there is δ = δ(s) with δ(0) = δ′(0) = 0 such that
∫_C |v_a + δ(s)w_0 + s w_1|^p dμ = 1.
If, in addition, (a, b) is such that μ_1 < 0 (which holds for a < a_0 and a ≤ b < h(a)), then for s sufficiently small,
(5.6) F(v_a + δ(s)w_0 + s w_1) < F(v_a).

PROOF OF THEOREM 1.3(i): By the above lemma, for s small |v_a + δ(s)w_0 + s w_1|_{L^p} = 1. Then (5.6) shows S(a, b) < R(a, b). Since S(a, b) is achieved, the minimizer is nonradial.

PROOF OF LEMMA 5.1: Set
G(δ, s) = ∫_C |v_a + δw_0 + sw_1|^p dμ.
We have G(0, 0) = 1 and ∂G/∂δ(0, 0) = p ∫_C v_a^{p−1}w_0 dμ > 0, since w_0 > 0. By the implicit function theorem, there is an open s-interval around 0 where δ = δ(s) is differentiable and
(5.7) G(δ(s), s) = 1.
Furthermore, by a direct computation and φ_1(−θ) = −φ_1(θ), we have
∂G/∂s(0, 0) = p ∫_C v_a^{p−1}w_1 dμ = p ∫_C v_a^{p−1}φ_1(θ) f_1 dμ = 0.
Differentiating (5.7), we get
(5.8) ∂G/∂δ(δ(s), s) δ′(s) + ∂G/∂s(δ(s), s) = 0.
Hence
∂G/∂δ(0, 0) δ′(0) + ∂G/∂s(0, 0) = 0,
which implies δ′(0) = 0. To show (5.6) we need δ″(0). Differentiating (5.8) with respect to s again and setting s = 0, we get
∂G/∂δ(0, 0) δ″(0) + ∂²G/∂s²(0, 0) = 0.
We have
∂²G/∂s²(0, 0) = p(p − 1) ∫_C v_a^{p−2}w_1² dμ and ∂G/∂δ(0, 0) = p ∫_C v_a^{p−1}w_0 dμ.
Thus,
δ″(0) = − p(p − 1) ∫_C v_a^{p−2}w_1² dμ / [p ∫_C v_a^{p−1}w_0 dμ].
Now,
F(v_a + δ(s)w_0 + sw_1) = F(v_a) + s² ∫_C |∇w_1|² + λ²w_1² dμ + 2δ(s) ∫_C ∇v_a·∇w_0 + λ²v_a w_0 dμ + 2s ∫_C ∇v_a·∇w_1 + λ²v_a w_1 dμ + δ²(s) ∫_C |∇w_0|² + λ²w_0² dμ + 2sδ(s) ∫_C ∇w_0·∇w_1 + λ²w_0 w_1 dμ.
Since v_a is radial,
∫_C ∇v_a·∇w_1 + λ²v_a w_1 dμ = ∫_C v_a^{p−1}w_1 dμ = 0;
therefore the fourth term is 0. Also, the fifth and sixth terms are of higher order. Hence
F(v_a + δ(s)w_0 + sw_1) = F(v_a) + s² ∫_C |∇w_1|² + λ²w_1² dμ + 2δ(s) ∫_C ∇v_a·∇w_0 + λ²v_a w_0 dμ + o(s²).
From (5.5) we get
∫_C |∇w_1|² + λ²w_1² dμ = (p − 1) ∫_C v_a^{p−2}w_1² dμ + μ_1 ∫_C w_1² dμ.
Since v_a is a solution of (2.3), we have
∫_C ∇v_a·∇w_0 + λ²v_a w_0 dμ = ∫_C v_a^{p−1}w_0 dμ.
Using the equalities above and
δ(s) = −s² (p − 1) ∫_C v_a^{p−2}w_1² dμ / [2 ∫_C v_a^{p−1}w_0 dμ] + o(s²),
we obtain for s sufficiently small
F(v_a + δ(s)w_0 + sw_1) = F(v_a) + s²μ_1 ∫_C w_1² dμ + o(s²) < F(v_a).
The proof of Lemma 5.1 is complete.

PROOF OF THEOREM 1.3(ii): First we note that, by a direct computation using (2.13), we always have for a < 0
R(a, a) > S(a, a) = S(0, 0).
We argue that for any a_0 < 0 there is ε_0 > 0 such that for all |(a, b) − (a_0, a_0)| < ε_0 with a < b, S(a, b) is achieved by a nonradial function. As (a, b) → (a_0, a_0), we have R(a, b) → R(a_0, a_0) > S(a_0, a_0) = S(0, 0). On the other hand, from Theorem 1.1(i) we have S(a, b) → S(a_0, a_0) as (a, b) → (a_0, a_0). Therefore for any a_0 < 0 there is ε_0 > 0 such that S(a, b) < R(a, b) if |(a, b) − (a_0, a_0)| < ε_0 with a ≤ b. By Theorem 1.2(i), S(a, b) is achieved, and due to the strict inequality, the minimizer for S(a, b) is nonradial.

6 Symmetry of Solutions

We use the moving plane method to show that for a ≤ b < a + 1 any positive solution of (2.3) on the cylinder C is symmetric about some t = const; thus, up to a translation in the t-direction, the solution is even in t and satisfies the monotonicity property. Together with the discussion in Section 2, we get that any solution of (1.8) satisfying u(x) > 0 for x ∈ R^N \ {0}, up to a dilation (2.4), satisfies the modified inversion symmetry in Theorem 1.4. Our argument follows closely a known moving-plane method, though we have a differential equation defined on the manifold C, whereas in the literature equations in R^N were treated.

Let v be a positive solution of (2.3). For μ < 0 and x = (t, θ) ∈ C, denote by x^μ = (2μ − t, θ) ∈ C the reflection of x relative to the hyperplane t = μ. We let
w_μ(x) = v(x^μ) − v(x),
a function defined on the region Σ_μ = {(t, θ) ∈ C : t < μ}. Clearly, w_μ(x) = 0 for any x ∈ T_μ = ∂Σ_μ = {(t, θ) ∈ C : t = μ}. We have the following:

LEMMA 6.1. There is R_0 > 0, independent of μ, such that if w_μ has a negative local minimum at (t_0, θ_0), then |t_0| ≤ R_0.

PROOF: First, by elliptic regularity theory and the fact that
∫_{τ ≤ t ≤ τ+1} v^{2*} dμ → 0 as |τ| → ∞,
we have v(t, θ) → 0 as |t| → ∞. Then we take R_0 such that
v(t, θ) < (λ²/(p − 1))^{1/(p−2)} for all |t| ≥ R_0.
Since v is a solution of (2.3), w_μ satisfies
(6.1) −Δw_μ + λ²w_μ − a(x)w_μ = 0 in Σ_μ,
where
a(x) = (p − 1) ∫_0^1 [v(x) + s(v(x^μ) − v(x))]^{p−2} ds.
Assume x_0 = (t_0, θ_0) ∈ Σ_μ is a local minimum with w_μ(x_0) < 0 and |t_0| > R_0. Then
v(x_0^μ) < v(x_0) < (λ²/(p − 1))^{1/(p−2)}.
Therefore,
(6.2) a(x_0) < λ².
Since Δw_μ(x_0) ≥ 0, we obtain
λ²w_μ(x_0) − a(x_0)w_μ(x_0) ≥ 0,
which means λ² ≤ a(x_0), contradicting (6.2).

We shall need the following:

Maximum Principle. If w_μ is a nonnegative solution of (6.1) and w_μ is zero at some point in Σ_μ, then w_μ ≡ 0.

Hopf Lemma. If w_μ is positive on Σ_μ, then ∂w_μ/∂t < 0 at any point of T_μ.

PROOF OF THEOREM 1.4: Since w_μ(t, θ) → 0 as t → −∞ and w_μ(x) = 0 for all x ∈ T_μ, Lemma 6.1 implies w_μ(x) ≥ 0 for x ∈ Σ_μ for all μ ≤ −R_0. Let μ_0 be the largest μ with the property that w_μ is nonnegative on Σ_μ. Clearly such a μ_0 exists since v(t, θ) → 0 as t → ∞. We argue that
(∗) w_μ(x) > 0 for x ∈ Σ_μ, μ < μ_0,
(∗∗) w_{μ_0} ≡ 0 on Σ_{μ_0}.
Since w_μ ≥ 0 for all μ < μ_0, it follows that v_t ≥ 0 for all t ≤ μ_0. To prove (∗), assume there is δ > 0 such that for some (t_0, θ_0) we have t_0 < μ_0 − δ and w_{μ_0−δ}(t_0, θ_0) = 0. By the maximum principle it follows that w_{μ_0−δ} ≡ 0. This implies that v(μ_0 − 2δ, θ_0) = v(μ_0, θ_0). Since ∂v/∂t ≥ 0, it follows that
∂v/∂t (t, θ_0) = 0 for all t ∈ [μ_0 − 2δ, μ_0].
Therefore
∂w_{μ_0−2δ}/∂t (μ_0 − 2δ, θ_0) = 0.
By the Hopf lemma we get w_{μ_0−2δ} ≡ 0. Continuing in this fashion, we obtain that v is independent of t, which is not possible. Therefore ∂w_μ/∂t < 0 on T_μ for μ < μ_0, and then v_t > 0 on Σ_{μ_0}. For (∗∗), assume w_{μ_0} ≢ 0. By the maximum principle and the Hopf lemma, w_{μ_0} > 0 on Σ_{μ_0} and ∂w_{μ_0}/∂t < 0 on T_{μ_0}. From the definition of μ_0, there is a sequence μ_k ↘ μ_0 and there are points x_k ∈ Σ_{μ_k}, absolute minima for w_{μ_k}, such that w_{μ_k}(x_k) < 0. By Lemma 6.1, (x_k) is a bounded sequence; hence (passing to a subsequence) we may assume it converges to some point x_0. It follows that x_0 ∈ T_{μ_0} and ∂w_{μ_0}/∂t(x_0) = 0, which is a contradiction.

Finally, after a translation in the t-direction we can assume μ_0 = 0. Therefore v is even in t and monotonically decreasing for t > 0. Translations in t on C correspond to dilations in R^N; hence, up to a dilation u(x) → τ^{(N−2−2a)/2} u(τx), positive solutions of (1.8) have the modified inversion symmetry given in Theorem 1.4.

7 The Cases N = 1 and N = 2

7.1 The Case N = 1

In one dimension, equation (1.8) becomes
(7.1) −(|x|^{−2a} u′)′ = |x|^{−bp} u^{p−1}, u ≥ 0, in R.
We have a rather complete answer for this problem; in fact, we can identify all solutions of (7.1). We look for solutions u that are critical points for the energy on D_a^{1,2}(R),
E_{a,b}(u) = [∫_R |x|^{−2a}|u′|² dx] / [∫_R |x|^{−bp}|u|^p dx]^{2/p}.
The parameter range is
a < −1/2, a + 1/2 < b ≤ a + 1, and p = 2/(−1 + 2(b − a)).
We first observe that E_{a,b}(u) is invariant under the following rather nonstandard dilations: for (τ_−, τ_+) ∈ (0, ∞)²,
(7.2) u(x) → u_{τ_−,τ_+}(x) = τ_−^{−(1+2a)/2} u(τ_− x) for x < 0, and τ_+^{−(1+2a)/2} u(τ_+ x) for x > 0.
That is, dilations can be made independently for x < 0 and x > 0 so that E_{a,b}(u) is still invariant. Note that for N = 1 the cylinder C = R × S⁰ = R ∪ R is the union of two real lines. We denote the two components by C_− and C_+, corresponding to R_− and R_+ in R, respectively. The coordinates for C_− and C_+ are y = (t, −1) ∈ C_− and y = (t, 1) ∈ C_+. For simplicity, we write them as t_1 (for (t, −1)) and t_2 (for (t, 1)). To be more precise, for a function w(y) defined on C we write w(y) = w_1(t_1) when y = t_1 ∈ C_− and w(y) = w_2(t_2) when y = t_2 ∈ C_+. To a function u ∈ D_a^{1,2}(R) we associate a function w (corresponding to a pair w_1, w_2) defined on C by
(7.3) u(x) = (−x)^{(1+2a)/2} w_1(−ln(−x)) for x < 0, and u(x) = x^{(1+2a)/2} w_2(−ln x) for x > 0,
with t_1 = −ln(−x) for x < 0 and t_2 = −ln x for x > 0. Then equation (7.1) is equivalent to the system of autonomous equations: for i = 1, 2,
(7.4) −d²w_i/dt_i² + ((1 + 2a)/2)² w_i = |w_i|^{p−2} w_i.
Critical points of E_{a,b}(u) on D_a^{1,2}(R) now correspond to critical points of a new energy functional on H¹(C),
F_{a,b}(w) = [∫_C |∇w|² + ((1 + 2a)/2)² |w|² dμ] / [∫_C |w|^p dμ]^{2/p}, w ∈ H¹(C).
It is easy to see that both the integral in the numerator and the one in the denominator decouple as two integrals, one for w_1 and one for w_2. Each of the two ODEs of (7.4) has the zero solution, and according to (2.7) with λ = −(1 + 2a)/2, the only (positive) homoclinic solutions are translates of
(7.5) v(t) = [(1 + 2a)² / (4(1 − 2(1 + a − b)))]^{(1−2(1+a−b))/(4(1+a−b))} [cosh( ((1 + 2a)(1 + a − b) / (1 − 2(1 + a − b))) t )]^{−(1−2(1+a−b))/(2(1+a−b))}.
The minimizers of F_{a,b}(w) are those w for which one of the two components w_1, w_2 is identically zero and the other is a translate of v(t) given above. According to (2.9), the infimum is
(7.6) S(a, b) = [(−1 − 2a)^{2(b−a)} / (2^{2(1+a−b)} (−1 + 2(b − a))^{−1+2(b−a)} (1 + a − b)^{2(1+a−b)})] · [Γ²(1/(2(1 + a − b))) / Γ(1/(1 + a − b))]^{2(1+a−b)}.
We observe that as b ↘ a + 1/2, we obtain S(a, b) → −1 − 2a. Note that when both w_1 and w_2 are nonzero and are (possibly different) translates of v(t) in (7.5), the energy F_{a,b}(w) takes the higher value
R(a, b) = 2^{2(1+a−b)} S(a, b),
which is the least energy in the radial class. On this energy level there is a two-parameter family of positive solutions, according to the two parameters that control by how much w_1 and w_2 are translated from (7.5). Correspondingly, u(x) defined in (7.3) gives a two-parameter family of solutions of (7.1), which after a dilation (7.2) for some (τ_−, τ_+) ∈ (0, ∞)² is radial in R. Summarizing, we can now state the main results for N = 1.

THEOREM 7.1 (Best Constants and Nonexistence of Extremal Functions)
(i) S(a, b) is continuous in the full parameter domain.
(ii) For b = a + 1, we have S(a, a + 1) = ((1 + 2a)/2)², and S(a, a + 1) is not achieved.
(iii) As b → (a + 1/2)^+, we get S(a, b) → −1 − 2a.

THEOREM 7.2 (Best Constants and Existence of Extremal Functions) For a + 1/2 < b < a + 1, S(a, b) is explicitly given in (7.6), and up to a dilation of the form (7.2) it is achieved at a function of the form (7.3) with either w_1 = 0 and w_2 given by (7.5), or vice versa. Consequently, the minimizer for S(a, b) is never radial.

THEOREM 7.3 (Bound State Solutions and Symmetry) Up to a dilation (7.2), the only solution of (7.1) besides the ground state solutions is of the form (7.3) with both w_1 and w_2 given by (7.5). Consequently, all bound state solutions of (7.1), possibly after a dilation given in (7.2), satisfy the modified inversion symmetry.

REMARK 7.4. Due to the degeneracy, the ground state solutions are discontinuous at 0 and identically zero in either R_− or R_+.

7.2 The Case N = 2

In this case the parameter range is
−∞ < a < 0, a < b ≤ a + 1, and p = 2/(b − a).
With no changes in the proofs for the case N ≥ 3, we have the following results.

THEOREM 7.5 (Best Constants and Nonexistence of Extremal Functions)
(i) S(a, b) is continuous in the full parameter domain.
(ii) For b = a + 1, we have S(a, a + 1) = a², and S(a, a + 1) is not achieved.

THEOREM 7.6 (Best Constants and Existence of Extremal Functions)
(i) For a < b < a + 1, S(a, b) is always achieved.
(ii) For b − a ∈ (0, 1) fixed, as a → −∞, S(a, b) is strictly increasing and
S(a, b) = (−a)^{2(b−a)} [S_p(R²) + o(1)].

One notes in (5.3) that for N = 2 we have a_0 = 0. Therefore we also have the following:

THEOREM 7.7 (Symmetry Breaking) There is a function h(a), defined for a < 0 and satisfying a < h(a) < a + 1 for a < 0 and a + 1 − h(a) → 0 as −a → ∞, such that for any (a, b) with a < 0 and a < b < h(a), the minimizer for S(a, b) is nonradial.

THEOREM 7.8 (Symmetry Property) For a < b < a + 1, the minimizer of S(a, b), possibly after a dilation u(x) → τ^{−a}u(τx), satisfies the modified inversion symmetry:
u(x/|x|²) = |x|^{−2a} u(x).

8 A Related Variational Problem

In this section we consider a related problem that can be solved by our method together with the results obtained in the previous sections. For 0 ≤ a < (N − 2)/2, special cases of the following problem have been considered in the literature (see Remark 8.3). For N ≥ 3, we consider the problem
(8.1) −div(|x|^{−2a}∇w) + γ|x|^{−2(1+a)}w = |x|^{−bp}w^{p−1}, w ≥ 0, in R^N,
where
a < (N − 2)/2, a ≤ b < a + 1, γ > −((N − 2 − 2a)/2)², p = 2N/(N − 2 + 2(b − a)).
The solutions of this problem in D_a^{1,2}(R^N) are critical points of
E_{a,b,γ}(w) = [∫_{R^N} |x|^{−2a}|∇w|² + γ|x|^{−2(1+a)}w² dx] / [∫_{R^N} |x|^{−bp}|w|^p dx]^{2/p}.
PROPOSITION 8.1. The solutions in D_a^{1,2}(R^N) of (8.1) are in one-to-one correspondence with the solutions in D_ā^{1,2}(R^N) of
−div(|x|^{−2ā}∇u) = |x|^{−b̄p}u^{p−1}, u ≥ 0, in R^N,
where
ā = a + λ − √(λ² + γ), b̄ = b + λ − √(λ² + γ), λ = (N − 2 − 2a)/2.
This correspondence is given by
u(x) = |x|^{λ−√(λ²+γ)} w(x).

Direct computations verify the proposition, and we omit them here. Due to this proposition, we can put equation (8.1) in the same framework as (1.8), and we can translate all of our results for (1.8) into corresponding results for (8.1). We note that even in the a-nonnegative region, for γ sufficiently large the minimizer of E_{a,b,γ} is nonradial. All of our main theorems are adapted in the obvious way; we leave the statements of these results to the reader.

REMARK 8.2. Proposition 8.1 also holds for N = 1 and N = 2 with a and b in the corresponding regions.

REMARK 8.3. For 0 ≤ a < (N − 2)/2, special cases of (8.1) were considered earlier in the literature: the case a = b = 0 with −S(0, 1) < γ < 0, and the cases a ≤ b < a + 1 with −S(a, a + 1) < γ < 0, a < b < a + 1 with γ > 0, and 0 < a = b with 0 < γ ≪ 1. There, however, only compactness of minimizing sequences was given.

9 Final Remarks and Questions

We finish the paper with some remarks and related open questions. First, we have given the best constants on the boundary of the a-negative region. In view of Theorem 1.3, it seems that there are no closed-form minimizers. An interesting question here is: what are the best constants in the interior of the a-negative region? Another question, again in view of Theorem 1.3, is: what are the optimal parameter values at which the symmetry breaking occurs, namely, the optimal form of h(a)? Our analysis indicates that the radial solutions become more and more unstable as a → −∞, which suggests that there should be more and more nonradial solutions; we have studied this in a companion paper. Some of the results in this paper, as well as those of that work, have been announced in the C. R. Acad. Sci. Paris note listed in the bibliography. Finally, an interesting question is related to the cases N = 1 and N = 2: with regard to the Caffarelli-Kohn-Nirenberg inequalities, what are the optimal spaces for N = 1 with b = a + 1/2 and for N = 2 with b = a?

After this paper was submitted, M. Willem kindly informed us of his preprint and another reference that contain results related to our Theorems 1.1(iii) and 1.2(ii), obtained by different methods.

Acknowledgment. The authors would like to thank L. Nirenberg for his encouragement and for pointing out some references during the preparation of this paper.

Bibliography

Aubin, T. Problèmes isopérimétriques et espaces de Sobolev. J. Differential Geometry 11 (1976), no. 4, 573–598.
Berestycki, H.; Esteban, M. Existence and bifurcation of solutions for an elliptic degenerate problem. J. Differential Equations 134 (1997), no. 1, 1–25.
Brezis, H.; Lieb, E. H. A relation between pointwise convergence of functions and convergence of functionals. Proc. Amer. Math. Soc. 88 (1983), no. 3, 486–490.
Brezis, H.; Nirenberg, L. Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents. Comm. Pure Appl. Math. 36 (1983), no. 4, 437–477.
Caffarelli, L. A.; Gidas, B.; Spruck, J. Asymptotic symmetry and local behavior of semilinear elliptic equations with critical Sobolev growth. Comm. Pure Appl. Math. 42 (1989), no. 3, 271–297.
Caffarelli, L. A.; Kohn, R.; Nirenberg, L. First order interpolation inequalities with weights. Compositio Math. 53 (1984), no. 3, 259–275.
Caldiroli, P.; Musina, R.
On the existence of extremal functions for a weighted Sobolev embedding with critical exponent. Calc. Var. Partial Differential Equations 8 (1999), no. 4, 365–387.
Catrina, F.; Wang, Z.-Q. On the Caffarelli-Kohn-Nirenberg inequalities. C. R. Acad. Sci. Paris Sér. I Math. 330 (2000), no. 6, 437–442.
Catrina, F.; Wang, Z.-Q. Positive bound states having prescribed symmetry for a class of nonlinear elliptic equations in R^N. Ann. Inst. H. Poincaré Anal. Non Linéaire, in press.
Chen, W.; Li, C. Classification of solutions of some nonlinear elliptic equations. Duke Math. J. 63 (1991), no. 3, 615–622.
Chou, K. S.; Chu, C. W. On the best constant for a weighted Sobolev-Hardy inequality. J. London Math. Soc. (2) 48 (1993), no. 1, 137–151.
Dautray, R.; Lions, J.-L. Mathematical analysis and numerical methods for science and technology. Vol. 1. Physical origins and classical methods. Springer, Berlin, 1985.
Davies, E. B. A review of Hardy inequalities. Preprint.
Gidas, B.; Ni, W. M.; Nirenberg, L. Symmetry of positive solutions of nonlinear elliptic equations in R^N. Mathematical analysis and applications, Part A, 369–402. Advances in Mathematics Supplementary Studies, 7a. Academic Press, New York–London, 1981.
Hardy, G. H.; Littlewood, J. E.; Pólya, G. Inequalities. Second edition. Cambridge University Press, 1952.
Horiuchi, T. Best constant in weighted Sobolev inequality with weights being powers of distance from the origin. J. Inequal. Appl. 1 (1997), no. 3, 275–292.
Korevaar, N.; Mazzeo, R.; Pacard, F.; Schoen, R. Refined asymptotics for constant scalar curvature metrics with isolated singularities. Invent. Math. 135 (1999), no. 2, 233–272.
Lieb, E. H. Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities. Ann. of Math. (2) 118 (1983), no. 2, 349–374.
Lin, C. S. Interpolation inequalities with weights. Comm. Partial Differential Equations 11 (1986), no. 14, 1515–1538.
Lin, S.-S. Existence of positive nonradial solutions for nonlinear elliptic equations in annular domains. Trans. Amer. Math. Soc. 332 (1992), no. 2, 775–791.
Lions, P.-L. The concentration-compactness principle in the calculus of variations. The locally compact case. I, II. Ann. Inst. H. Poincaré Anal. Non Linéaire 1 (1984), 109–145, 223–283.
Lions, P.-L. The concentration-compactness principle in the calculus of variations. The limit case. I, II. Rev. Mat. Iberoamericana 1 (1985), no. 1, 145–201; 1 (1985), no. 2, 45–121.
Talenti, G. Best constant in Sobolev inequality. Ann. Mat. Pura Appl. (4) 110 (1976), 353–372.
Wang, Z.-Q. Existence and symmetry of multi-bump solutions for nonlinear Schrödinger equations. J. Differential Equations 159 (1999), no. 1, 102–137.
Wang, Z.-Q.; Willem, M. Singular minimization problems. J. Differential Equations 161 (2000), no. 2, 307–320.
Willem, M. Minimax theorems. Progress in Nonlinear Differential Equations and Their Applications, 24. Birkhäuser, Boston, 1996.
Willem, M. A decomposition lemma and critical minimization problems. Preprint.

FLORIN CATRINA
Utah State University
Department of Mathematics and Statistics
3900 Old Main Hill
Logan, Utah 84322-3900
E-mail: sl9qg@math.usu.edu

ZHI-QIANG WANG
Utah State University
Department of Mathematics and Statistics
3900 Old Main Hill
Logan, Utah 84322-3900
E-mail: wang@math.usu.edu

Received February 2000.
188829
https://amydiduch.weebly.com/elasticity.html
ECON 101: THE BASICS

Price Elasticity of Demand (with a few notes on Price Elasticity of Supply)
Dr. Amy McCormick Diduch

Elasticity is a measure of responsiveness to a change. Elasticity values can help firms decide how to set prices or help governments decide which goods or services to tax. This tutorial covers the calculation of price elasticity of demand; however, there are many other elasticity calculations that are highly useful to economic analysis.

Price elasticity of demand measures the responsiveness of quantity demanded to a change in price. Quantity demanded is highly elastic if it changes dramatically in response to a price change. (Highly elastic demand is like a very stretchy rubber band: it can easily change position.) Quantity demanded is highly inelastic if it changes very little in response to a price change. (Highly inelastic demand is like a very stiff rubber band: it is hard to change its position very much.)

1. Calculating elasticity of demand

It doesn't take much searching to find newspaper articles containing a statement such as "diamond prices rose 15% last year, resulting in a 10% decline in net sales." Assuming that the price increase is the only reason for the decline in quantity demanded, we can use these figures to calculate the price elasticity of demand, which tells us whether the change in quantity demanded was "large" or "small" in importance.

Price elasticity of demand = ED = (% change in quantity demanded) / (% change in price)

For the hypothetical diamond example above, the price elasticity of demand would be:
ED = |−10% change in quantity demanded / 15% change in price| = |−0.67| = 0.67
(Note that I've expressed this result in absolute value terms. The negative sign reflects the negative relationship between price and quantity along a demand curve. It doesn't provide us with additional useful information in this example, so we are safe to ignore it and use the absolute value.)

How do we interpret the resulting value? In our diamond example, notice that the percentage change in price was larger than the resulting percentage change in quantity demanded. In other words, the size of the quantity response was small relative to the size of the price change, resulting in an elasticity value less than 1.

Demand is described as inelastic if the price elasticity of demand is less than 1. Inelastic demand indicates the change in quantity demanded is small relative to the change in price.

Inelastic demand = (SMALLER % change in quantity demanded) / (LARGER % change in price)

Goods and services that are likely to have inelastic demand are those that (1) have very few good substitutes, (2) are considered to be necessities, (3) take a very small share of the consumer's budget, or (4) leave the consumer very little time to adjust behavior in response to the price change.

Demand is described as elastic if the price elasticity of demand is greater than 1. Elastic demand indicates the change in quantity demanded is large relative to the change in price.
Elastic demand = (LARGER % change in quantity demanded) / (SMALLER % change in price)

Goods and services that are likely to have elastic demand are those that (1) have many good substitutes (including competing brands of the product), (2) are considered to be luxuries, (3) take a fairly large share of the consumer's budget, or (4) are products for which the consumer has plenty of time to adjust behavior in response to the price change.

Demand is described as unit elastic if the price elasticity of demand equals 1.

What if price increased by 15% but quantity demanded did not change at all? In this case the price elasticity of demand would be 0%/15% = 0; we describe this as perfectly inelastic demand. Here, price doesn't matter to the consumer: they will continue to purchase their desired amount even when prices change significantly. The graph on the original page illustrates a perfectly inelastic (vertical) demand curve: if price increases from P1 to P2, quantity demanded does not change at all. Demand will be perfectly inelastic (or close to perfectly inelastic) for goods that are considered to be necessities.

Now suppose price increased by a very small amount (say, 0.1%) and quantity demanded decreased by a very large amount (say, 99.9%). Demand is nearly perfectly elastic in this instance. Price is the only thing that matters; if price increases at all, people choose not to buy the product (or switch to competitor brands).

Other examples:
Suppose that a 13% increase in the price of electricity results in a 5% decline in electricity usage. The price elasticity of demand would be |−5% / 13%| ≈ 0.38, which is inelastic.
Suppose that a 5% decrease in the price of admission to an amusement park resulted in a 17% increase in park attendance. The price elasticity of demand would be |17% / −5%| = 3.4, which is elastic.
Suppose that a 9% decrease in the price of roller skates resulted in a 9% increase in sales. The price elasticity of demand would be |9% / −9%| = 1.0, which is unit elastic.

2. Point elasticity of demand

If you know the demand schedule for a product (or have an equation for the demand curve), you can calculate elasticity values for any price and quantity combination along the demand curve using one of two formulas: point elasticity or arc elasticity. If the demand curve is a straight line, the point elasticity calculation is simple and more precise. It requires two steps:

Step 1: Calculate the slope of the demand curve.
Step 2: For any given price and quantity combination along this demand curve, calculate
ED = price elasticity of demand = |1/slope| × P/Q.
This is known as the point elasticity formula. We'll use the absolute value of the inverse of the slope.

(As a side note, the formula derives directly from the definition of price elasticity of demand, which can be written as (∆Q/Q) / (∆P/P). This is a fraction divided by a fraction; rearrange to get (∆Q/Q) × (P/∆P). Rearrange again to get (∆Q/∆P) × (P/Q), or (1/slope) × (P/Q).)

We'll demonstrate with the following demand schedule, which can be represented by the linear equation P = 60 − ½Q:

| Point | Price | Quantity demanded |
| --- | --- | --- |
| A | $60 | 0 |
| B | $55 | 10 |
| | $40 | 40 |
| | $30 | 60 |
| | $20 | 80 |

First, find the slope of this line and then find its inverse. Slope is calculated as (change in value along y-axis) / (change in value along x-axis), or "rise over run," and can be calculated between any two points on a straight line. Suppose we want to calculate the slope between points A and B. Price declines from 60 to 55 when we move from A to B, so the change along the y-axis is −5. Quantity increases from 0 to 10, so the change along the x-axis is 10.
The slope from A to B, then, is −5/10 = −1/2. (We'll use the absolute value of this.) Notice that when we already have the equation for the line (in the form y = b + mx or, here, P = 60 − ½Q), we can read the slope directly from the equation. The inverse of the slope is just 1/slope, or 1/(1/2) = 2. So for this example, the point elasticity formula is
ED = 2 × P/Q.
An important detail: the slope of this demand curve is constant (and therefore the inverse of the slope is constant), but the price elasticity of demand will be different at each price and quantity combination.

Second, choose a price and quantity combination and calculate the price elasticity of demand:
When price is $55, quantity demanded is 10: ED = |1/slope| × P/Q = 2 × 55/10 = 110/10 = 11, which reflects very elastic demand at this price.
When price is $40, quantity demanded is 40: ED = 2 × 40/40 = 2, which is elastic demand.
When price is $30, quantity demanded is 60: ED = 2 × 30/60 = 1, which is unit elastic.
When price is $20, quantity demanded is 80: ED = 2 × 20/80 = ½ or 0.5, which reflects inelastic demand.

As we move along the demand curve from high prices to low prices, the elasticity value gets smaller and smaller. We will find this relationship along any downward-sloping linear demand curve. The graph on the original page illustrates the entire demand curve (from the equation P = 60 − ½Q), with the elasticity calculations for these price and quantity combinations written above the curve. Details to notice: (1) demand is elastic at higher prices and becomes more inelastic as prices fall, and (2) demand is unit elastic at the midpoint of a downward-sloping linear demand curve. Thus, even if we know a good is a luxury, say, we can't be certain that it has elastic demand. The exact elasticity value will depend on its current price.

3. Arc elasticity of demand

Suppose we do not have information about the entire demand curve relationship, or cannot determine the slope at a particular price and quantity combination. We can still calculate an approximate elasticity value if we know the starting and ending price and quantity combinations. The arc elasticity formula requires a few steps (none of which is mathematically difficult). Suppose that price increases from $10 to $12, causing quantity to fall from 20 to 16.
Calculate the percentage change in quantity as (Q2 − Q1) / [(Q2 + Q1)/2], which is the change in Q divided by the average value of Q.
Calculate the percentage change in price as (P2 − P1) / [(P2 + P1)/2], which is the change in P divided by the average value of P.
Divide the percentage change in quantity by the percentage change in price.
Interpret the result according to whether the value is greater than, less than, or equal to 1.
(For this example: the percentage change in quantity is −4/18 ≈ −22.2%, the percentage change in price is 2/11 ≈ 18.2%, so ED = |−22.2/18.2| ≈ 1.22, which is elastic.)

4. Elasticity and total revenue

Suppose you have the following information about the demand schedule for yoga classes (the first two columns of the table below). The third column provides the price elasticity of demand at each of the possible prices. (To calculate elasticity: the slope of this demand curve is 1/20, so 1/slope equals 20; multiply it by P/Q. When price is 10, P/Q is 10/80 = 1/8. Multiply by 20 to get the elasticity value of 2.5 given in the third column.)

| Price | Quantity demanded | Elasticity |
| --- | --- | --- |
| $10 | 80 | 2.5 |
| $9 | 100 | 1.8 |
| $7 | 140 | 1.0 |
| $5 | 180 | 0.56 |

The total revenue earned by a business is equal to the price charged for the good or service multiplied by the quantity sold at that price.
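The point and arc formulas above are easy to check numerically. Below is a minimal Python sketch (the function names are mine, not from this page) that reproduces the elasticity values computed for the demand curve P = 60 − ½Q and for the $10 to $12 arc example:

```python
def point_elasticity(inv_slope, price, quantity):
    """Point elasticity: |1/slope| * P/Q along a linear demand curve."""
    return abs(inv_slope) * price / quantity

def arc_elasticity(p1, q1, p2, q2):
    """Arc (midpoint) elasticity between two points on a demand curve."""
    pct_q = (q2 - q1) / ((q2 + q1) / 2)   # % change in quantity (midpoint base)
    pct_p = (p2 - p1) / ((p2 + p1) / 2)   # % change in price (midpoint base)
    return abs(pct_q / pct_p)

# Demand curve P = 60 - 1/2 Q, so slope = -1/2 and |1/slope| = 2.
for p, q in [(55, 10), (40, 40), (30, 60), (20, 80)]:
    print(p, q, point_elasticity(2, p, q))   # 11.0, 2.0, 1.0, 0.5

# Arc example: price rises from $10 to $12, quantity falls from 20 to 16.
print(round(arc_elasticity(10, 20, 12, 16), 2))  # 1.22 -> elastic
```

Running it prints 11.0, 2.0, 1.0, and 0.5 for the four points, and roughly 1.22 for the arc example, matching the values worked out above.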
If a business knows (at least roughly) the demand schedule for its product, it can estimate the total revenue it would earn at any price it might choose. As it turns out, a business doesn't have to know its demand schedule exactly to figure out how a price increase or decrease would affect its total revenue: a reasonable estimate of the price elasticity of demand at the current price level will provide this information. (By the way, this is one of the many reasons why knowledge of economics is helpful to business managers! A good manager, however, will not simply seek to maximize total revenue; she must take costs into consideration as well.) The same information is useful for government policy decisions. Knowing whether demand is elastic or inelastic allows policymakers to estimate, for example, the effect of a change in an entrance fee or toll on the total revenue received by the government.

Here are the important relationships:

When demand is elastic, a price increase will result in a reduction in total revenue, and a price decrease will result in an increase in total revenue. WHY? Because elastic demand indicates that people change their quantity demanded significantly in response to a price change. When price goes up, they buy a lot less of the product, so total revenue goes down. When price goes down, they buy a lot more of the product, so total revenue goes up.

When demand is inelastic, a price increase will result in an increase in total revenue, and a price decrease will result in a decrease in total revenue. WHY? Because inelastic demand indicates that people do not change quantity demanded significantly in response to a price change. When price goes up, people don't reduce quantity demanded by much, so total revenue goes up. When price goes down, people don't increase quantity demanded very much, so total revenue goes down.

Suppose you are currently charging a price of $9. From the table above, we see that the price elasticity of demand is 1.8. With just this information (the price and the elasticity value), we can determine that a price decrease would lead to an increase in total revenue. Suppose instead that the current price of the membership is $5 and you know the price elasticity of demand is inelastic (at 0.56). With just this information, you know that if you increase price you will also increase total revenue.

When demand is unit elastic, a price change will have no impact on total revenue. WHY? Because unit elastic means that the % change in quantity demanded exactly matches the % change in price, so the product (price × quantity) remains the same. If you are currently charging a price of $7 and find the price elasticity of demand to be 1.0, you know that making small changes in price will not significantly change total revenue.

5. Price elasticity of supply

Although there is (generally) a positive relationship between quantity supplied and price, the extent to which quantity supplied increases in response to a price change varies with the amount of time available and the degree to which suppliers have flexibility in production. Price elasticity of supply is calculated using the same tools as price elasticity of demand. (The point and arc formulas work for supply curves the same way they do for demand curves.) Elasticity values are still described as inelastic (for values less than 1) or elastic (for values greater than 1). Supply will be perfectly inelastic when it is not possible to change the quantity supplied.
(An example would be the number of seats in a basketball arena.) Supply will be perfectly elastic when a supplier is content to supply any quantity needed at the going price but would not supply any of the product if the price were to fall. The original page illustrates these two extremes with diagrams: a vertical supply curve for perfectly inelastic supply and a horizontal one for perfectly elastic supply. It also provides a downloadable set of practice problems (practice_problems_for_price_elasticity_of_demand.pdf, 524 kb) covering the same material.
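As a closing numerical illustration of the revenue rules in section 4: the yoga-class table gives a slope of 1/20 and elasticities of 2.5 at $10, 1.8 at $9, 1.0 at $7, and 0.56 at $5, which together pin the schedule down as Q = 280 − 20P. The short Python sketch below (my own reconstruction, consistent with those stated values) confirms each rule:

```python
def quantity(p):
    # Demand schedule consistent with the yoga-class example:
    # slope dP/dQ = 1/20 and elasticity 2.5 at P = 10 imply Q = 280 - 20P.
    return 280 - 20 * p

def revenue(p):
    return p * quantity(p)

def elasticity(p):
    return 20 * p / quantity(p)   # |1/slope| * P/Q with 1/slope = 20

for p in [5, 6, 7, 8, 9]:
    print(p, quantity(p), round(elasticity(p), 2), revenue(p))
# P=9 (elastic, 1.8): cutting price to $8 raises revenue (900 -> 960).
# P=5 (inelastic, 0.56): raising price to $6 raises revenue (900 -> 960).
# P=7 (unit elastic): revenue is at its maximum (980).
```

Revenue rises from 900 to 960 when the price falls from the elastic point $9 to $8, rises from 900 to 960 when the price rises from the inelastic point $5 to $6, and peaks at 980 at the unit-elastic price of $7, exactly as the rules predict.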
188830
https://www.healthline.com/health/high-cholesterol/familial-hypertriglyceridemia
What to Know About Familial Hypertriglyceridemia

Familial hypertriglyceridemia is a genetic condition that can cause high levels of triglycerides in the blood, which can lead to other health issues like heart disease and stroke.

High triglycerides can have many causes, including weight, diet, and other factors. They can also be caused by genetics. When it's an inherited condition, it's known as familial hypertriglyceridemia.

Triglycerides are a type of waxy fat found in your blood. Your body makes triglycerides, and you also get them from the food you eat. When you eat, any extra calories and sugar you don't need at the time are converted into triglycerides and stored in fat cells. Later on, when you need energy, hormones release the stored triglycerides. You need a certain amount of triglycerides for energy, but having a level that's too high can put you at risk for a variety of health issues.

In this article, we'll look at high triglycerides caused by genetic factors, how this condition is diagnosed and treated, and how to lower your risk of complications.

What is familial hypertriglyceridemia?

Hypertriglyceridemia (or high triglycerides) results from the overproduction of very low-density lipoproteins (VLDL), which carry more triglycerides into the blood. There are many potential causes of high triglycerides. Familial hypertriglyceridemia is specifically caused by genetics and is passed down in families, but other factors, such as your diet and overall health, can influence the severity of the condition. There are two types of hypertriglyceridemia: familial (primary), which is inherited, and acquired (secondary), which results from lifestyle factors or other conditions. Familial hypertriglyceridemia is estimated to affect approximately 1 out of every 100 people in the United States.

The importance of knowing your family history

Familial hypertriglyceridemia usually doesn't cause symptoms unless it's severe enough to lead to another health condition. Because it usually doesn't cause symptoms, it's especially important to know your family history. To understand your risk for familial hypertriglyceridemia, it's important to know whether one or more people in your family have high triglycerides or heart disease. If your family history indicates that you may be at risk for high triglycerides, talk with your doctor about testing options. Knowing your family history will help your doctor make sure you get the right screening. Even if you don't have a family history of high triglycerides, this condition can occur without risk factors and without warning.
Therefore, it's important to get your triglyceride levels checked every 4 to 6 years. If you have risk factors for high triglycerides, like smoking or being overweight, ask your doctor about getting your triglyceride levels checked more often.

What's involved with diagnosis?

To diagnose high triglycerides, your doctor will perform a physical exam and ask about any symptoms you're having, as well as your family history. Then they'll order blood tests to check for elevated levels of triglycerides. Your doctor may recommend that you fast for 9 to 12 hours before the test to get an accurate reading of the fat levels in your blood. The test itself is a quick, regular blood draw.

If you have high triglycerides, your doctor may try to determine the underlying cause. For instance, high triglycerides may be caused by lifestyle factors such as diet and weight, by another underlying condition, or by genetics. If you have a family history of high triglycerides or heart disease but none of these other underlying causes, your doctor will usually be able to diagnose familial hypertriglyceridemia based on your history.

What's considered a high triglyceride level?

It's important to understand your triglyceride levels. This will help you monitor your condition and how it's progressing. Here's a summary of how triglyceride levels are classified:

| | Adults | Children 10–19 | Children under 10 |
| --- | --- | --- | --- |
| Normal | under 150 mg/dl | under 90 mg/dl | under 75 mg/dl |
| Borderline high | 150–199 mg/dl | 90–129 mg/dl | 75–99 mg/dl |
| High | 200–499 mg/dl | over 130 mg/dl | over 100 mg/dl |
| Very high | over 500 mg/dl | n/a | n/a |

Can high triglycerides lead to complications?

If not treated, high triglycerides can lead to complications, including cardiovascular disease, blood clots, and pancreatitis.

What can you do to lower your triglycerides?

If you have familial hypertriglyceridemia, it may be more difficult to lower your triglycerides because the condition is genetic. However, there are still lifestyle changes you can make to lower the risk of complications from high triglycerides, such as maintaining a moderate weight, improving your diet, exercising regularly, and not smoking.

Other treatment options

While lifestyle changes can be an effective way to keep high triglycerides under control, medication can also be prescribed as first-line treatment. This may be particularly helpful if your triglyceride levels are high, or if lifestyle changes don't lower your triglyceride levels enough. One of the most common types of medication for familial hypertriglyceridemia is a fibrate. This medication is especially effective for people at a higher risk of pancreatitis. Other supplements and medications that may help lower triglyceride levels include omega-3 fatty acids (fish oil), niacin, and statins.

The bottom line

When high triglycerides are caused by genetics, the condition is known as familial hypertriglyceridemia. With this condition, there is too much of a type of fat (lipid) in your blood known as VLDL. High triglyceride levels from any cause, whether due to genetics or lifestyle, can lead to complications such as cardiovascular disease, blood clots, or pancreatitis. However, high triglyceride levels typically don't cause symptoms. That's why it's important to know your family history and to talk with your doctor about getting tested for familial hypertriglyceridemia if it runs in your family. By making lifestyle changes and taking the right type of medication, you can help manage your high triglyceride levels and prevent further complications.
Factors like an unhealthy diet, lack of exercise, and some medical conditions increase… Hyperlipidemia is abnormally high levels of fats in the blood, which include cholesterol and triglycerides. Learn about hyperlipidemia and what you… A lipid disorder means you have high levels of low-density lipoprotein (LDL) cholesterol, triglycerides, or both. Learn about prevention and treatment. OUR BRANDS
188831
https://adacomputerscience.org/concepts/boolean_construct_truth_table
Construct a truth table

A truth table is a way of summarising and checking the logic of a circuit. The table shows all possible combinations of inputs and, for each combination, the output that the circuit will produce. You can produce truth tables for parts of a circuit to check the logic at any stage. You can also use a truth table to find a simpler set of logic to represent a circuit, although you are more likely to be asked to do this using a Boolean expression.

Core Part A: How many rows in a truth table?

Before you draw a truth table, it is useful to work out how many rows you will need, which depends on the number of inputs. If a circuit has 2 inputs, there are 4 possible combinations of the inputs, because in a digital circuit an input can only be 1 or 0. Each unique combination is shown in the table below, with the inputs labelled A and B:

| A | B |
| --- | --- |
| 0 | 0 |
| 0 | 1 |
| 1 | 0 |
| 1 | 1 |

When you draw a truth table, there is a useful technique for populating the rows and columns with 1s and 0s. In the rightmost column (in this case, the second column), fill the column with alternating 0s and 1s. In the next column to the left (in this case, the first column), fill the column with alternating pairs of 0s and 1s. Now you have all of your input combinations. In general, if you have n inputs, there will be 2^n combinations. In the example above, there are 2 inputs and therefore 2² = 4 combinations. If there are 3 inputs, there will be 2³ = 8 combinations.

We will use the same technique to create a truth table for a circuit with 3 inputs labelled A, B and C. First calculate how many input combinations you need: here you will need 8 rows after the column headings, because 2³ = 8. Then draw a column for each input, with the required number of rows. In the rightmost column (the third column), fill the column with alternating 0s and 1s; in the next column to the left (the second column), fill the column with alternating pairs of 0s and 1s; and in the first column, fill the column with alternating quadruples of 0s and 1s:

| A | B | C |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 0 | 1 |
| 0 | 1 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
| 1 | 1 | 1 |

Now you have all of your input combinations for 3 inputs. The same ordering can also be generated programmatically, as the sketch below shows.
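The filling technique above produces standard binary counting order, and a minimal Python sketch (mine, not from the original page) generates the same rows:

```python
from itertools import product

def input_rows(n):
    """All 2**n input combinations, in the order used in the tables above."""
    return list(product([0, 1], repeat=n))

for row in input_rows(3):
    print(*row)
# Prints the 8 rows 0 0 0, 0 0 1, 0 1 0, ..., 1 1 1, matching the table.
```

itertools.product emits the combinations in exactly the order of the tables above, with the leftmost input changing least often.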
Try it yourself! How many rows for the input combinations would you need in a truth table with five inputs? (Answer: 2⁵ = 32.)

Core Part B: Construct a truth table from a Boolean expression

Consider the Boolean expression (A OR B) AND (NOT B). How would you create a truth table to represent it? Let's go through creating a truth table for this expression in detail.

Firstly, put in a column for each of the inputs, A and B. As discussed in the section above, if you have n inputs there will be 2^n combinations; since there are 2 inputs, you need enough rows after the column headings for 2² = 4 combinations. Populate the input columns using the technique shown above.

It can be helpful to add a column for each part of the statement, in the same order in which the Boolean expression will be evaluated to produce the final output. Remember that the order of operator precedence is: brackets, NOT, AND, OR. Since the first part of the statement is in brackets, it will be evaluated first, so add A OR B as the next column. (We haven't kept the brackets at this point, since on its own (A OR B) is equivalent to A OR B.) The next part of the statement to be evaluated is the second set of brackets, (NOT B); add this as the next column. The last column is the final output, headed with the full Boolean expression.

Now, using the inputs in each row, work out the results of each column one at a time, from left to right: first A OR B, then NOT B (by reversing the input B in each row), and finally the output column, which is the logical AND of the previous two columns. The completed table is shown below; the single combination of inputs that produces a final output of 1 is A = 1, B = 0:

| A | B | A OR B | NOT B | (A OR B) AND NOT B |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 1 | 0 |
| 0 | 1 | 1 | 0 | 0 |
| 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 1 | 0 | 0 |

Try it yourself! Create and complete the truth table for the expression Q = (NOT A) AND (B AND C). In that table, only one combination of inputs (A = 0, B = 1, C = 1) produces an output of 1: the truth table reflects the logic of the expression. The same kind of table can also be produced in code, as sketched below.
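The worked example can be checked programmatically. The Python sketch below (mine, not from the page) evaluates each intermediate column of (A OR B) AND (NOT B) for all four input rows:

```python
from itertools import product

# Truth table for Q = (A OR B) AND (NOT B), matching the worked example.
print("A B | A OR B | NOT B | Q")
for a, b in product([0, 1], repeat=2):
    a_or_b = a | b          # bitwise OR acts as logical OR on 0/1 values
    not_b = 1 - b           # NOT as 1 - x on 0/1 values
    q = a_or_b & not_b      # AND of the two intermediate columns
    print(f"{a} {b} |   {a_or_b}    |   {not_b}   | {q}")
# Only the row A = 1, B = 0 gives Q = 1, as in the completed table above.
```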
Advanced Part C: Construct a truth table from a complex expression

Consider the following Boolean expression:
Q = (A∨B)∧(¬C∧A)
From this expression we can work out that:
- the truth table will need 8 rows (2³ = 8) for the input combinations, because there are 3 inputs (A, B, C)
- the truth table will need 7 columns: there are 3 inputs in the expression (A, B, C) and 4 Boolean operators (two ANDs, one OR, and one NOT)
You will also need a few extra rows for the column headers.

Start by drawing the table. It is conventional to put the inputs into the first columns (working from the left). Next, you have two parts of the expression that are in brackets. These must be evaluated first, before the AND operation that links the two parts; the order in which you evaluate the two bracketed parts is not important. A∨B is straightforward; however, before you can evaluate ¬C∧A, you must first evaluate ¬C. Finally, you can evaluate the AND operation that links the two parts in brackets.

Try it yourself! Fill in the truth table for the expression Q = (A∨B)∧(¬C∧A), presenting the operations in a logical order, and then check it against the completed version below:

| A | B | C | A∨B | ¬C | ¬C∧A | (A∨B)∧(¬C∧A) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 | 0 | 0 | 0 |
| 1 | 0 | 0 | 1 | 1 | 1 | 1 |
| 1 | 0 | 1 | 1 | 0 | 0 | 0 |
| 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| 1 | 1 | 1 | 1 | 0 | 0 | 0 |

If you look at the last two columns, you will see that they are identical. This is an example of where you can spot, from the truth table, that an expression can be simplified: sometimes a simpler expression will implement the same logic as a more complex design. Here the expression ¬C∧A has the same logic as the expression (A∨B)∧(¬C∧A). You can also use Boolean algebra or a Karnaugh map to simplify an expression.

Core Part D: Construct a truth table from a circuit diagram

When you are given a circuit diagram, you will need to break down the logic for each gate present in the circuit. Consider the diagram in Figure 1, a circuit with 2 logic gates: an AND gate taking inputs B and C, whose output feeds an OR gate together with input A.

Try it yourself! How many rows for the inputs will be needed for the truth table for this circuit? (There are 3 inputs, so you need 2³ = 8 rows.)

The next thing to work out is how many columns will be needed. In the example circuit diagram there are 2 logic gates, AND and OR. The number of columns needed is equal to the number of inputs plus the number of logic gates. The truth table for this circuit will therefore have a total of 5 columns, along with 8 rows for the inputs (and a few extra rows for the column headers). Before you can fill in the truth table, you need to determine the logic statement that defines the output of each gate: in addition to A, B and C, these will be the column headings.
Core

Part D: Construct a truth table from a circuit diagram

When you are given a circuit diagram, you will need to break down the logic for each gate that is present in the circuit. Consider the diagram in Figure 1:

Figure 1: A circuit diagram with 2 logic gates

Try it yourself! How many rows for the inputs will be needed for the truth table for this circuit? (Answer: there are 3 inputs, so 2³ = 8 rows.)

The next thing to work out is how many columns will be needed. In the example circuit diagram, there are 2 logic gates, AND and OR. The number of columns needed is equal to the number of inputs plus the number of logic gates. The truth table for this circuit diagram will, therefore, have a total of 5 columns along with 8 rows for the inputs (and a few extra rows for the column headers).

Before you can fill in the truth table, you need to determine the logic statement that defines the output of each gate; in addition to A, B, C, these will be the column headings.

The AND gate has inputs B and C, so its output can be defined as B AND C.
The OR gate takes input A and the output from the AND gate, so the heading for the final output is A OR (B AND C).

Complete the truth table:

| A | B | C | B AND C | A OR (B AND C) |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 | 1 |
| 1 | 0 | 1 | 0 | 1 |
| 1 | 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 1 | 1 |

Advanced

Part E: Construct a truth table from a complex diagram

The circuit diagram in Figure 2 is slightly more complex than in the previous section. This time there are 3 inputs and 3 logic gates.

Figure 2: A circuit diagram with 3 logic gates

From this diagram we can work out that:

The truth table will need 8 rows (2³ = 8) for the input combinations, because there are 3 inputs (A, B, C).
The truth table will need 6 columns: there are 3 inputs (A, B and C) and 3 logic gates in the diagram (AND, OR, NOT).
You will also need a few extra rows for the column headers.

Before you can fill in the truth table, you need to determine the logic statement that defines the output of each gate. For example, the AND gate has inputs A and B, so its output can be defined as A∧B. You can keep track of the output of each gate by writing the statements on the lines in the circuit diagram. In Figure 3, the diagram includes the written statements for each gate.

Figure 3: Annotated circuit

The expression for this circuit diagram is: Q = (A AND B) OR (NOT C)

Using symbols, the expression becomes: Q = (A∧B)∨(¬C)

Now you can write these statements as column headings in the order that they will be evaluated, based on the order of operator precedence.

| A | B | C | A∧B | ¬C | (A∧B)∨(¬C) |
| --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 1 | 1 |
| 0 | 0 | 1 | 0 | 0 | 0 |
| 0 | 1 | 0 | 0 | 1 | 1 |
| 0 | 1 | 1 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 1 | 1 |
| 1 | 0 | 1 | 0 | 0 | 0 |
| 1 | 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 1 | 1 | 0 | 1 |
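The same gate-by-gate bookkeeping used in the annotated circuit of Figure 3 can be mirrored in code: compute one column per gate, then combine them for the final output. A minimal Python sketch (our own illustration, assuming the circuit Q = (A AND B) OR (NOT C) shown above):

```python
from itertools import product

# One column per gate, evaluated in order of precedence, as in the annotated circuit.
print("A B C | A AND B | NOT C | Q")
for a, b, c in product([0, 1], repeat=3):
    and_gate = a & b           # AND gate with inputs A and B
    not_gate = 1 - c           # NOT gate with input C
    q = and_gate | not_gate    # final OR gate
    print(f"{a} {b} {c} |    {and_gate}    |   {not_gate}   | {q}")
```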
Core

Part F: Construct a truth table from a problem statement

If you are given a problem statement and asked to draw a truth table, you will need to first work out the Boolean expression and then produce the truth table as described in the sections above. Consider the following problem statement:

Harriet is looking to download an app on her device. In order to download a new app, the following requirements need to be satisfied: the device's storage (S) must not be full, and the app needs to be free (F) or to have been purchased (P). The result of the requirements being fulfilled will be: the download (D) should be 1 if Harriet can download the app.

Step 1: Create the expression

Remember that when you write an expression, you must focus only on the logic that produces an output value of 1. When you think that the logic is correct, you can make the truth table and double-check your logic.

Identify and label the inputs. There are 3 inputs for this scenario:

The device's storage (S). If the storage is full (1), then the final output should be 0.
The app being free (F).
The app having been purchased (P).

Identify and label the output. The output will be whether the app can be downloaded (D).

Break down the logic from the problem statement. The download (D) should be 1 when:

the storage (S) is NOT full, AND
the app is free (F) OR the app has been purchased (P).

Write the expression. From the breakdown of the problem statement, the expression is: D = NOT S AND (F OR P)

Using symbols, the expression is: D = ¬S∧(F∨P)

Notice the use of brackets due to the order of operator precedence. If the brackets were not used, then the expression would be evaluated as (NOT S AND F) OR P, since AND is evaluated before OR. In that case, the download would occur whenever the app had been purchased (P), regardless of whether the storage (S) was full or not.

Step 2: Create the truth table

Now we have the expression, we can create the truth table.

| S | F | P | ¬S | F∨P | ¬S∧(F∨P) |
| --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 1 | 1 | 1 | 1 |
| 0 | 1 | 0 | 1 | 1 | 1 |
| 0 | 1 | 1 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 1 | 0 |

Here you can see that the truth table reflects the logic of the problem. The combinations with an output of 1 will allow Harriet to download the app: the storage is not full, and either the app is free or it has been purchased. All other combinations reflect a situation where the app would not be downloaded.
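To double-check the expression against the problem statement, the same enumeration approach from earlier sections works for D = ¬S∧(F∨P). This sketch is our own illustration:

```python
from itertools import product

# D = (NOT S) AND (F OR P): a download is allowed only when storage is not full
# and the app is either free or already purchased.
print("S F P | D")
for s, f, p in product([0, 1], repeat=3):
    d = int((not s) and (f or p))
    print(f"{s} {f} {p} | {d}")
```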
188832
https://math-drills.com/multiplication2/distributive_property_0201_001.php
Multiply 2-Digit by 1-Digit Numbers Using the Distributive Property (A)

Welcome to The Multiply 2-Digit by 1-Digit Numbers Using the Distributive Property (A) Math Worksheet from the Long Multiplication Worksheets Page at Math-Drills.com. This math worksheet was created or last revised on 2023-03-10 and has been viewed 25 times this week and 703 times this month. It may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math.

Teachers can use math worksheets as tests, practice assignments or teaching tools (for example in group work, for scaffolding or in a learning center). Parents can work with their children to give them extra practice, to help them learn a new math skill or to keep their skills fresh over school breaks. Students can use math worksheets to master a math skill through practice, in a study group or for peer tutoring.

Use the buttons below to print, open, or download the PDF version of the Multiply 2-Digit by 1-Digit Numbers Using the Distributive Property (A) math worksheet. The size of the PDF file is 20339 bytes. Preview images of the first and second (if there is one) pages are shown. If there are more versions of this worksheet, the other versions will be available below the preview images. For more like this, use the search bar to look for some or all of these keywords: mathematics, fillable, savable, saveable, distributive, property, mental, strategy, multiplication, multiplying, factors, products.

The Print button initiates your browser's print dialog. The Open button opens the complete PDF file in a new browser tab. The Download button initiates a download of the PDF math worksheet. Teacher versions include both the question page and the answer key. Student versions, if present, include only the question page.

This worksheet is fillable and savable. It can be filled out and downloaded or printed using the Chrome or Firefox browsers, or it can be downloaded, filled out and saved or printed in stand-alone apps like Foxit Reader. A tutorial video is available on YouTube.

More Long Multiplication Worksheets

Copyright © 2005-2025 Math-Drills.com. You may use the math worksheets on this website according to our Terms of Use to help students learn math.
188833
https://bmcpregnancychildbirth.biomedcentral.com/articles/10.1186/s12884-025-07482-7
BMC Pregnancy and Childbirth

Research | Open access

Independent risk factors for placental abruption: a systematic review and meta-analysis

Dexin Chen, Xuelin Gao, Tingyue Yang, Xing Xin, Guohua Wang, Hong Wang, Rongxia He & Min Liu

BMC Pregnancy and Childbirth volume 25, Article number: 351 (2025)

Abstract

Background

Placental abruption is one of the most severe complications during pregnancy, and its associated risk factors remain incompletely understood and somewhat controversial.

Methods

This study conducted a systematic search of the PubMed, Embase, Cochrane, Web of Science, and Scopus databases to collect literature related to placental abruption, with a cutoff date of July 30, 2024.

Results

A total of 54 observational studies were included, covering 7,267,241 pregnant women, with 47,702 cases diagnosed with placental abruption. The study identified three categories of independent risk factors. The first category includes baseline maternal characteristics (18 items), such as maternal age ≥ 35 years, black race, low prepregnancy BMI (< 18.5 kg/m²), unmarried status, smoking during pregnancy, alcohol consumption, inadequate prenatal care (< 4 visits), marijuana use, multiple pregnancy, parity ≥ 3, anemia (hemoglobin < 11 g/dL), previous placental abruption, previous cesarean section, previous miscarriage, previous stillbirth, cervical incompetence, habitual abortions, and assisted reproductive technology. Among these, previous placental abruption (AOR = 2.72, 95% CI [2.16, 3.42]) was found to be the most significant risk factor. The second category includes pregnancy-related complications (7 items): preterm premature rupture of membranes, preeclampsia, small for gestational age, polyhydramnios, antepartum hemorrhage, gestational hypertension, and placenta previa. Of these, placenta previa (AOR = 7.31, 95% CI [4.78, 11.19]) was identified as the most significant risk factor. The third category consists of other independent risk factors (33 items) and protective factors (3 items). However, methodological inconsistencies and publication bias in the current studies may affect the reliability of the meta-analysis results.

Conclusion

This study summarizes 58 independent risk factors for placental abruption, covering various aspects such as maternal baseline characteristics and pregnancy complications. For these high-risk populations, it is essential to strengthen the frequency of prenatal check-ups, establish early warning systems, and provide targeted health guidance. Future research should further refine risk factor models and develop more targeted preventive strategies to reduce the incidence of placental abruption and improve maternal and neonatal outcomes.

Registration: PROSPERO CRD42024546514. Clinical trial number: not applicable.

Introduction

Placental abruption is a serious complication of pregnancy, with an incidence rate among pregnant women ranging from 0.6 to 1.2%. Despite its relatively low occurrence, it can lead to severe obstetric complications, including postpartum hemorrhage, peripartum hysterectomy, amniotic fluid embolism, severe respiratory distress, disseminated intravascular coagulation, renal failure, and even maternal mortality [2,3,4]. Furthermore, placental abruption negatively impacts the long-term prognosis for both mothers and newborns, increasing morbidity and mortality rates [5, 6].
The prognosis for patients with placental abruption is closely related to early, accurate diagnosis and timely intervention. However, due to its nonspecific symptoms, insidious onset, and rapid progression, the rates of missed diagnosis and misdiagnosis are high, making diagnosis particularly challenging. Therefore, timely identification of the risk factors for placental abruption is crucial for early diagnosis and treatment. In recent years, an increasing number of studies have focused on the potential risk factors for placental abruption, including advanced maternal age, chronic hypertension, multiparity, preeclampsia, small-for-gestational-age infants, and previous medical history. However, due to differences in study design, sample sizes, and assessment methods, the conclusions in the existing literature show considerable inconsistency. Some early studies suggested that preeclampsia is not a risk factor for placental abruption [8, 9], while more recent studies indicate that preeclampsia is actually an independent risk factor for placental abruption [10, 11]. Furthermore, women with a history of placental abruption have a significantly increased risk of recurrence [12, 13], but earlier studies failed to sufficiently confirm this, considering such a history not to be an independent risk factor. The inconsistency in these findings often stems from differences in sample selection, data analysis methods, and variable control across studies, leading to significant misunderstanding and misapplication of risk factors, which causes confusion for clinicians in risk assessment and intervention decision-making. As a result, existing research fails to provide a unified and systematic guideline for the early prevention and management of placental abruption, highlighting the urgent need for further high-quality meta-analyses to synthesize and resolve these contradictions.

Globally, the incidence of placental abruption and its associated complications poses significant challenges to maternal and neonatal health. This issue is especially acute in resource-limited regions, where the lack of effective screening, diagnosis, and treatment methods exacerbates the high incidence and mortality rates of placental abruption, further deepening global inequities in maternal and child health [14, 15]. Therefore, identifying the independent risk factors for placental abruption not only aids in the early identification of high-risk pregnancies but also contributes to the optimization of prenatal care worldwide, reducing unnecessary maternal and neonatal health losses. This is crucial for reducing maternal mortality, improving neonatal health outcomes, and minimizing the wastage of medical resources.

Although previous studies have attempted to summarize the risk factors for placental abruption, most have remained at the level of descriptive analysis, without conducting in-depth statistical analysis of the inconsistent findings. Additionally, these studies lack a systematic summary of independent risk factors and fail to assess their clinical guidance value. Therefore, in response to the contradictions and shortcomings in the existing literature, this study aims to comprehensively review and analyze the independent risk factors for placental abruption through systematic review and meta-analysis, clarifying the inconsistencies in current findings.
We hope that this research will provide clinicians with more reliable tools for risk factor identification, promote the application of personalized prenatal care, and ultimately contribute to reducing the incidence of placental abruption and improving maternal and neonatal health outcomes globally.

Materials and methods

This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and has been registered on the PROSPERO platform (registration number: CRD42024546514).

Inclusion and exclusion criteria

Inclusion criteria for this study require that at least one risk factor associated with placental abruption be reported. These factors may include, but are not limited to, pregnancy-related hypertension, trauma, smoking, alcohol consumption, maternal age, and parity. Additionally, the reported risk factors must be adjusted for confounding variables, with the adjusted odds ratio (AOR) provided. This requirement ensures that the included studies reflect more reliable and clinically meaningful results, rather than focusing on the impact of a single factor on placental abruption: in multivariable models, other potential confounders are controlled for, allowing the findings to more accurately identify independent risk factors. Exclusion criteria include review articles, commentaries, conference abstracts, and studies with incomplete data, unextractable data, or only univariate analyses.

Search strategy

A search strategy combining both subject headings and free-text terms was employed to search the PubMed, Embase, Cochrane, Web of Science, and Scopus databases, with a cutoff date of July 30, 2024. The search terms and strategy were as follows: (placental abruption OR placental abruptions OR placenta abruption OR abruptio placentae) AND (hazard OR risk factors OR risk factor OR related factors OR influence factors OR influencing factors). A detailed search strategy is provided in Supplementary Table 1. Additionally, reference lists of included studies were searched for further relevant literature. To ensure the inclusivity and broad representativeness of the studies, our literature search imposed no language restrictions; studies in other languages found to have potential value were handled through translation resources to ensure their inclusion in the analysis. Furthermore, to mitigate publication bias, we manually searched conference abstracts and clinical trial registries, and consulted gray literature databases such as OpenGrey and the Grey Literature Report. Lastly, we engaged with experts in the field to obtain potentially unpublished research data or studies.

Literature screening and data extraction

Literature screening was conducted strictly according to the predefined inclusion and exclusion criteria. For each eligible study, the following information was extracted: authors, study period, country, study type, sample size, age, data source, and adjusted confounding factors. Two researchers independently performed the literature screening, initially reviewing titles and abstracts for preliminary selection, followed by a thorough reading of the full texts to exclude studies that did not meet the criteria. Ultimately, the two researchers verified the selected studies against each other.
In cases of disagreement, discussions were held to reach a consensus; if consensus could not be achieved, a third-party researcher was consulted for a final evaluation regarding the inclusion of the study.

Quality assessment of studies

The quality of the included observational studies was assessed using the Newcastle-Ottawa Scale (NOS). This scale evaluates studies on three aspects: selection, comparability, and outcome, with a scoring range of 0–9 points. Specific assessment criteria included: selection (representativeness of the study population, adequacy of sample size, confirmation of exposure factors, etc.), comparability (selection of control groups, control of confounding factors, etc.), and outcome (methods of outcome assessment, completeness of follow-up, etc.). Based on the scores, studies were categorized as high quality (7–9 points), moderate quality (4–6 points), or low quality (0–3 points).

Statistical analysis

Statistical analysis was performed using STATA software (version 16.0). Initially, descriptive statistics were computed for the included studies to characterize the study types, sample characteristics, and distribution of risk factors. For the obtained AORs and their 95% confidence intervals (CI), a meta-analysis was conducted using the "metan" command in STATA. In selecting the statistical model, we determined whether to use a fixed-effect model or a random-effects model based on the heterogeneity between studies. The fixed-effect model assumes that the true effect is the same across all studies, making it suitable for situations with low heterogeneity and providing more precise effect estimates. In contrast, the random-effects model accounts for potential variation in the true effect between studies and is more appropriate for situations with high heterogeneity. During the analysis, we assessed heterogeneity using Cochran's Q test and the I² statistic. If the P-value was greater than 0.05 or the I² value was less than 50%, heterogeneity was considered within an acceptable range and the fixed-effect model was chosen; otherwise, the random-effects model was applied. To evaluate publication bias, we used funnel plots for visual analysis. The symmetry of the funnel plot typically reflects the completeness of the study results and the potential for publication bias; if a funnel plot shows asymmetry, this may indicate the presence of publication bias, in which case we discuss the issue and consider its potential impact on the study findings. In all statistical analyses, a P-value of less than 0.05 was considered statistically significant.
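For readers unfamiliar with the pooling step that the "metan" command performs, the Python sketch below illustrates inverse-variance fixed-effect pooling of AORs on the log scale, together with Cochran's Q and I². It is a didactic illustration only: the AOR values and confidence intervals are made-up placeholders, not data from the included studies.

```python
import math

# Illustrative placeholder studies: (AOR, lower 95% CI, upper 95% CI)
studies = [(2.5, 1.8, 3.5), (3.0, 2.0, 4.5), (2.6, 1.9, 3.6)]

weights, log_ors = [], []
for aor, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
    weights.append(1 / se**2)                        # inverse-variance weight
    log_ors.append(math.log(aor))

pooled_log = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
print(f"pooled AOR = {math.exp(pooled_log):.2f}, "
      f"95% CI [{math.exp(pooled_log - 1.96 * se_pooled):.2f}, "
      f"{math.exp(pooled_log + 1.96 * se_pooled):.2f}]")

# Cochran's Q and I^2 to assess heterogeneity; a random-effects model would be
# preferred when I^2 >= 50% (or the Q-test P-value is below 0.05).
q = sum(w * (y - pooled_log) ** 2 for w, y in zip(weights, log_ors))
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```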
Results

Overview of included literature

We retrieved a total of 7,066 articles from five databases. After excluding 3,077 duplicate records, we conducted a preliminary screening of the remaining 3,759 articles based on titles and abstracts. Ultimately, we downloaded the full texts of 78 articles for further review, resulting in the inclusion of 54 studies that met the eligibility criteria for subsequent analysis. The literature screening flowchart is illustrated in Fig. 1. The 54 included studies comprised 20 cross-sectional studies, 32 retrospective cohort studies, and 2 prospective cohort studies, with a total sample size of 7,267,241 individuals, of whom 47,702 were pregnant women diagnosed with placental abruption. The age of the pregnant women ranged from 13 to 53 years; however, 31 studies did not provide age information. Data for 32 studies were sourced from databases, while the remaining 22 studies were conducted in hospitals. All included studies adjusted for confounding factors; however, 11 studies did not explicitly report the specific confounding factors that were adjusted for. Basic information about the included studies is detailed in Supplementary Table 1.

Quality assessment of evidence

Among the 54 included studies, quality assessment scores indicated that 21 studies scored between 4 and 6 points, categorizing them as moderate quality, while 33 studies scored between 7 and 9 points, categorizing them as high quality. The average score on the Newcastle-Ottawa Scale (NOS) was 6.87 ± 1.49 points. Although there was some variability in scores across different domains, most studies performed well in the areas of "participant selection" and "control of confounding variables". However, several studies received lower scores in the "exposure measurement" domain, suggesting potential methodological biases or inconsistencies in this area. The quality assessment results are presented in Fig. 2. The detailed scores for each study can be found in Supplementary Table 3.

Meta-analysis results

Risk factors associated with maternal baseline characteristics

A total of 21 exposure factors related to maternal baseline characteristics were identified as potential contributors to placental abruption. Given that the I² value was less than 50% and the P-value was greater than 0.05, a fixed-effect model was employed for the meta-analysis. The results confirmed 18 factors as independent risk factors for placental abruption, specifically: maternal age ≥ 35 years, black race, low prepregnancy BMI (< 18.5 kg/m²), unmarried status, smoking during pregnancy, alcohol consumption, inadequate prenatal care (< 4 visits), marijuana use, multiple pregnancy, parity ≥ 3, anemia (hemoglobin < 11 g/dL), previous placental abruption, previous cesarean section, previous miscarriage, previous stillbirth, cervical incompetence, habitual abortions, and assisted reproductive technology. Among these, previous placental abruption (AOR = 2.72, 95% CI [2.16, 3.42]) was identified as the most significant risk factor. Three exposure factors (maternal age < 20 years, loss of employment, and multiparity) did not show a significant association with placental abruption. Detailed results are presented in Fig. 3.

Risk factors associated with maternal pregnancy complications

Eight exposure factors related to maternal pregnancy complications were identified as potential contributors to placental abruption. With an I² value of less than 50% and a P-value greater than 0.05, a fixed-effect model was again utilized for the meta-analysis. The results indicated that, except for gestational diabetes, which showed no significant association with placental abruption, the remaining seven pregnancy complications were identified as independent risk factors: preterm premature rupture of membranes, preeclampsia, small for gestational age, polyhydramnios, antepartum hemorrhage, gestational hypertension, and placenta previa. Among these, placenta previa (AOR = 7.31, 95% CI [4.78, 11.19]) was recognized as the most significant risk factor. Detailed results are shown in Fig. 4.

Risk factors not subject to data pooling

A total of 36 factors associated with the occurrence of placental abruption were reported in single studies only, and thus were not included in the meta-analysis.
Among these, three were identified as protective factors: folic acid, multivitamins, and the combination of folic acid and multivitamins. The dosage of folic acid was 0.4 mg per day, while the dosage of the vitamins was not clearly reported. The remaining 33 factors were recognized as independent risk factors for placental abruption, with hyperthyroidism, uterine malformation, and preterm uterine contractions having the highest AOR values, making them the most significant risk factors in this group. Detailed information is provided in Table 1.

Subgroup analysis

We conducted a subgroup analysis of various risk factors based on study type, with the criterion that the number of studies in each subgroup be ≥ 2 to meet the requirements for meta-analysis. Of the 58 exposure factors, 10 risk factors met this criterion. The results of the subgroup analysis showed that, with the exception of preterm premature rupture of membranes, the AORs for the other nine risk factors were higher in case-control studies than in cohort studies. Additionally, heterogeneity between studies was reduced within the subgroups. For further details, see Table 2.

Publication bias

For exposure factors with more than 10 studies, funnel plots were generated to assess publication bias. The results indicated that the funnel plots for maternal age ≥ 35 years, smoking during pregnancy, gestational hypertension, and preeclampsia exhibited asymmetry, suggesting a potential risk of publication bias in the current research. Detailed findings are illustrated in Fig. 5.

Discussion

Placental abruption is a significant contributor to maternal morbidity and perinatal mortality. This study compiles the largest sample size to date, aiming to comprehensively summarize the independent risk factors associated with placental abruption.

Several risk factors associated with placental abruption relate to the baseline characteristics of the patient. Prevention of placental abruption should focus on managing modifiable behavioral factors, with particular attention to smoking (OR = 1.84, 95% CI: 1.74–1.95) and alcohol use (OR = 1.27, 95% CI: 1.11–1.45), both of which can lead to pathological changes in the placenta. Smoking is associated with elevated plasma homocysteine levels, which in turn can cause endothelial cell damage and local thrombosis. Additionally, the vasoconstrictive effects of nicotine and carbon monoxide, as well as hypoxic conditions, may lead to placental infarction and an increased risk of arterial rupture, thereby triggering placental abruption [19,20,21]. Alcohol, another modifiable risk factor, also significantly increases the incidence of placental abruption. Alcohol easily crosses the placenta and accumulates in the fetus and amniotic fluid, disrupting the hormonal balance between the fetus and the mother. It may also cause constriction of placental and umbilical blood vessels, thereby increasing the risk of placental abruption. Previous studies have shown that the use of illicit drugs, particularly marijuana, significantly increases the risk of placental abruption [23, 24], a finding consistent with our results. Therefore, future efforts should focus on establishing a prenatal risk prevention and control system that includes personalized behavioral interventions for smoking, alcohol abuse, and drug misuse, along with nutritional support and psychological counseling to improve the placental microenvironment.
In fact, smoking, alcohol use, and cocaine use are more common among Black women than White women, which may help explain the higher rate of placental abruption observed in Black women in this study [25, 26]. Furthermore, other risk factors such as inadequate prenatal care, low pre-pregnancy body mass index, anemia, and being unmarried can be mitigated through public health interventions such as education and guidance during pregnancy. For existing risk factors—such as those reported in this study, including Black race, multiple pregnancies, parity ≥ 3, previous placental abruption, prior cesarean section, previous miscarriage, previous stillbirth, cervical incompetence, habitual abortions, and assisted reproductive technology—while they may be difficult to avoid, they should be closely monitored by healthcare providers and patients alike. Increased screening efforts are essential for timely identification and management of potential adverse outcomes. Previous placental abruption may lead to persistent damage to the decidua basalis layer, with the underlying mechanism involving vascular fibrosis mediated by the TGF-β signaling pathway; this process significantly weakens the adhesion between the placenta and the uterine wall. Concurrently, a tendency toward thrombosis, such as elevated antiphospholipid antibody levels, may exacerbate the risk of local microthrombosis in the placenta. Therefore, women with a history of placental abruption should remain vigilant regarding the increased risk of placental abruption in future pregnancies.

Identifying and implementing preventive measures for risk factors associated with pregnancy complications is a crucial strategy for avoiding placental abruption. Hypertensive disorders of pregnancy, including gestational hypertension and preeclampsia, have been widely recognized as significant risk factors for placental abruption [11, 28]. Hypertension can lead to arteriosclerosis of the small arteries at the site of placental attachment, resulting in ischemia, necrosis, or even rupture and hematoma formation in distal capillaries. These conditions may compress the placenta and lead to its premature detachment. Once placental abruption occurs, it typically presents as widespread and rapidly progressing, making it prone to misdiagnosis. Therefore, for patients with gestational hypertension, symptoms such as bleeding, uterine tenderness, or frequent contractions should raise a high suspicion of placental abruption. Premature rupture of membranes is also considered a risk factor for placental abruption, potentially related to intrauterine infection and sudden decreases in uterine pressure: premature rupture can lead to a sharp decline in intrauterine pressure, triggering the separation of the decidua from the membranes, activating prostaglandin factors, and resulting in uterine contractions and placental abruption. Additionally, polyhydramnios can increase intrauterine pressure, affecting placental blood flow and further elevating the risk of placental abruption. If a pregnant woman with polyhydramnios experiences membrane rupture, the sudden drop in intrauterine pressure may lead to misalignment and detachment of the uterine wall from the placenta, exacerbating the occurrence of placental abruption [1, 32, 33].
Previous studies have indicated that maternal and fetal circulatory changes due to hypoxia, uteroplacental vascular dysfunction, and placental ischemia are major pathophysiological mechanisms underlying placental abruption [34, 35]. Consequently, placental abruption is viewed as a long-term chronic condition originating in early pregnancy. Based on this understanding, conditions such as small-for-gestational-age infants and preeclampsia can be associated with placental abruption through the pathophysiological mechanisms of ischemia and hypoxia, thereby becoming risk factors [37,38,39]. Furthermore, antepartum hemorrhage may lead to vascular damage and localized inflammatory responses within the uterus, affecting the attachment and stability of the placenta and thus increasing the risk of detachment. For placenta previa, the associated risk arises from the unique anatomical characteristics of the lower uterine segment: this region has a thinner myometrium and poorly developed spiral arteries, leading to inadequate placental blood supply. Additionally, the abnormally implanted placental villi degrade the extracellular matrix via MMP-9-mediated processes, further compromising the stability of placental attachment. These findings suggest that a dynamic monitoring system should be established for pregnant women with these high-risk factors; this could combine ultrasound-based placental morphological assessment with uterine artery Doppler blood flow studies to enable early risk detection.

In addition to the aforementioned risk factors, this study summarizes 36 independent factors that were reported in only single studies. Among these, three were identified as protective factors: folic acid, multivitamins, and a combination of folic acid and multivitamins. Folic acid improves placental endothelial function by promoting DNA methylation and reducing homocysteine levels, while vitamins, through their antioxidant properties (such as vitamins C and E) and immune modulation (such as vitamin D), can help suppress placental oxidative stress and inflammatory responses [41, 42]. The remaining 33 are independent risk factors. These factors encompass various domains, including psychological and emotional aspects, reproductive history and physiology, pregnancy-related factors, lifestyle and environmental influences, nutrition, and monitoring indicators. For instance, pregnant women with anxiety or schizophrenia exhibit significantly elevated levels of plasma cortisol, corticotropin-releasing hormone, and serotonin; this abnormal increase in hormone levels may lead to inflammatory responses at the maternal-fetal interface, thereby increasing the risk of placental abruption. Additionally, pregnant women with hyperthyroidism often present symptoms such as excessive sweating, heat intolerance, insomnia, and palpitations, which may result in impaired placental function in late pregnancy, subsequently leading to adverse outcomes such as placental abruption. Uterine malformations can also increase the risk of placental abruption by restricting the placental attachment area and disrupting decidual vascular remodeling, leading to inadequate basal plate perfusion and abnormal shear forces [45, 46]. Although these factors have each been reported in only a single study, they still warrant attention.
Notably, the cumulative effect of two or more risk factors can significantly elevate the incidence of placental abruption. For example, the combined effect of smoking and hypertension on the risk of placental abruption often exceeds the risk expected from each individual factor. Many risk factors may therefore coexist in the same patient, leading to additive effects and potentially more severe consequences.

The subgroup analysis of this study revealed a significant impact of study design on the estimation of risk factors for placental abruption. Among the 10 risk factors that met the subgroup analysis criteria, the AORs in case-control studies were generally higher than those in cohort studies, with the exception of preterm premature rupture of membranes. This discrepancy may be attributed to design biases inherent in the two types of studies: case-control studies, due to their retrospective nature, are more susceptible to recall and selection biases, which may lead to an overestimation of the association between risk factors and outcomes. In contrast, cohort studies, through prospective data collection and more rigorous control of confounding variables, tend to provide more conservative effect estimates. Notably, the reduction in heterogeneity between studies following subgroup analysis suggests that stratification by study type effectively mitigated the variability caused by design differences, thereby enhancing the internal consistency of the results. These findings emphasize the need for cautious interpretation of placental abruption risk factors, considering potential biases related to study design. Additionally, this study found no significant association between certain maternal characteristics (such as young age, unemployment status, parity, and gestational diabetes) and placental abruption. However, this does not imply that these factors are completely unrelated to placental abruption; on the contrary, they should be further examined in future high-quality studies, particularly gestational diabetes.

The strength of this study lies in its inclusion of multifactorial analyses, which minimize confounding effects and provide a deeper exploration of the independent risk factors for placental abruption. However, several limitations need to be acknowledged. First, among the 54 studies included, 21 were of moderate quality. These studies have certain limitations in design and methodology, which may lead to increased bias and heterogeneity in the results. For example, some studies had shortcomings in exposure measurement and control of confounding variables, which could affect the accurate assessment of independent risk factors for placental abruption. Nevertheless, these studies still provide important clinical insights, suggesting that future research should focus on improving study design quality to enhance the understanding of risk factors for placental abruption. Second, the lack of data on the definition, types, and severity of placental abruption limited further subgroup analyses. Moreover, when attempting subgroup analysis based on geographic region, the limited number of studies related to each risk factor made this analysis unfeasible. However, exploring regional differences in the prevalence of risk factors is crucial for the field, and future research should give this more attention.
Third, despite our systematic literature search aimed at including all relevant studies, the asymmetry observed in the funnel plot suggests the potential for publication bias. This bias may arise from the absence of unpublished negative results or studies with small sample sizes, leading to inaccurate estimates of the true effects of certain independent risk factors in the meta-analysis. Therefore, future research should place more emphasis on collecting and analyzing grey literature to comprehensively evaluate the independent risk factors for placental abruption. Finally, the inclusion criteria of this study required that the included studies report AORs adjusted for confounding variables. While this decision improved the internal validity of the results, it may have introduced selection bias: studies that did not report confounder-adjusted AORs were often from low-resource settings or were observational studies using univariate analyses. The exclusion of these studies may have led to an overestimation of the effect size of certain risk factors and an underestimation of the role of factors unique to resource-limited environments.

Conclusion

Placental abruption is one of the most severe complications during pregnancy, with risk factors spanning multiple domains. This study identified 18 risk factors associated with maternal baseline characteristics, 7 risk factors related to pregnancy complications, and 33 independent risk factors reported in only a single study. Based on these findings, clinical practice should enhance screening and intervention for high-risk pregnant women, particularly through the early identification and management of known risk factors. Future research should focus on exploring the interactions between these risk factors and developing more precise clinical interventions to reduce the incidence of placental abruption. Additionally, research on individualized risk assessment models is recommended to guide more effective prevention and treatment strategies in clinical practice, thereby improving maternal and neonatal health outcomes.

Data availability

All data generated or analysed during this study are included in this published article and its supplementary information files.

References

1. Tikkanen M. Placental abruption: epidemiology, risk factors and consequences. Acta Obstet Gynecol Scand. 2011;90(2):140–9.
2. Riihimäki O, Paavonen J, Luukkaala T, Gissler M, Metsäranta M, Andersson S, et al. Mortality and causes of death among women with a history of placental abruption. Acta Obstet Gynecol Scand. 2017;96(11):1315–21.
3. Downes KL, Grantz KL, Shenassa ED. Maternal, labor, delivery, and perinatal outcomes associated with placental abruption: a systematic review. Am J Perinatol. 2017;34(10):935–57.
4. Ushiro S, Suzuki H, Ueda S. Japan obstetric compensation system for cerebral palsy: strategic system of data aggregation, investigation, amelioration and no-fault compensation. J Obstet Gynaecol Res. 2019;45(3):493–513.
5. Okoth K, Chandan JS, Marshall T, Thangaratinam S, Thomas GN, Nirantharakumar K, et al. Association between the reproductive health of young women and cardiovascular disease in later life: umbrella review. BMJ. 2020;371:m3502.
6. Downes KL, Shenassa ED, Grantz KL. Neonatal outcomes associated with placental abruption. Am J Epidemiol. 2017;186(12):1319–28.
7. Ananth CV, Lavery JA, Vintzileos AM, Skupski DW, Varner M, Saade G, et al. Severe placental abruption: clinical definition and associations with maternal complications. Am J Obstet Gynecol. 2016;214(2):272.e1–e9.
8. Tikkanen M, Hämäläinen E, Nuutila M, Paavonen J, Ylikorkala O, Hiilesmaa V. Elevated maternal second-trimester serum alpha-fetoprotein as a risk factor for placental abruption. Prenat Diagn. 2007;27(3):240–3.
9. Shek JWN, S PL, K LT, Wong SF. Incidence, risk factors, and clinical outcomes of placental abruption in a tertiary hospital in Hong Kong: a retrospective case-control study. Hong Kong J Gynecol Obstet Midwifery. 2023;23(1).
10. Kotani T, Imai K, Ushida T, Moriyama Y, Nakano-Kobayashi T, Osuka S, et al. Pregnancy outcomes in women with thyroid diseases. JMA J. 2022;5(2):216–23.
11. Anderson E, Raja EA, Shetty A, Gissler M, Gatt M, Bhattacharya S, et al. Changing risk factors for placental abruption: a case crossover study using routinely collected data from Finland, Malta and Aberdeen. PLoS ONE. 2020;15(6):e0233641.
12. Kyozuka H, Murata T, Fukusda T, Yamaguchi A, Kanno A, Yasuda S, et al. Teenage pregnancy as a risk factor for placental abruption: findings from the prospective Japan Environment and Children's Study. PLoS ONE. 2021;16(5):e0251428.
13. Macheku GS, Philemon RN, Oneko O, Mlay PS, Masenga G, Obure J, et al. Frequency, risk factors and feto-maternal outcomes of abruptio placentae in Northern Tanzania: a registry-based retrospective cohort study. BMC Pregnancy Childbirth. 2015;15:242.
14. Liang X, Lyu Y, Li J, Li Y, Chi C. Global, regional, and national burden of preterm birth, 1990–2021: a systematic analysis from the Global Burden of Disease Study 2021. EClinicalMedicine. 2024;76:102840.
15. Bączkowska M, Kosińska-Kaczyńska K, Zgliczyńska M, Brawura-Biskupski-Samaha R, Rebizant B, Ciebiera M. Epidemiology, risk factors, and perinatal outcomes of placental abruption: detailed annual data and clinical perspectives from a Polish tertiary center. Int J Environ Res Public Health. 2022;19(9).
16. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
17. Stang A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur J Epidemiol. 2010;25(9):603–5.
18. Ray JG, Laskin CA. Folic acid and homocyst(e)ine metabolic defects and the risk of placental abruption, pre-eclampsia and spontaneous pregnancy loss: a systematic review. Placenta. 1999;20(7):519–29.
19. Naeye RL. Abruptio placentae and placenta previa: frequency, perinatal mortality, and cigarette smoking. Obstet Gynecol. 1980;55(6):701–4.
20. La Verde M, Torella M, Ronsini C, Riemma G, Cobellis L, Marrapodi MM, et al. The association between fetal Doppler and uterine artery blood volume flow in term pregnancies: a pilot study. Ultraschall Med. 2024;45(2):184–9.
21. Ghosh G, Breborowicz A, Brazert M, Maczkiewicz M, Kobelski M, Dubiel M, et al. Evaluation of third trimester uterine artery flow velocity indices in relationship to perinatal complications. J Matern Fetal Neonatal Med. 2006;19(9):551–5.
22. Lintao RCV, Kammala AK, Vora N, Yaklic JL, Menon R. Fetal membranes exhibit similar nutrient transporter expression profiles to the placenta. Placenta. 2023;135:33–42.
23. McDonald SD, Vermeulen MJ, Ray JG. Risk of fetal death associated with maternal drug dependence and placental abruption: a population-based study. J Obstet Gynaecol Can. 2007;29(7):556–9.
24. Hoskins IA, Friedman DM, Frieden FJ, Ordorica SA, Young BK. Relationship between antepartum cocaine abuse, abnormal umbilical artery Doppler velocimetry, and placental abruption. Obstet Gynecol. 1991;78(2):279–82.
25. Ananth CV, Oyelese Y, Yeo L, Pradhan A, Vintzileos AM. Placental abruption in the United States, 1979 through 2001: temporal trends and potential determinants. Am J Obstet Gynecol. 2005;192(1):191–8.
26. Saftlas AF, Olson DR, Atrash HK, Rochat R, Rowley D. National trends in the incidence of abruptio placentae, 1979–1987. Obstet Gynecol. 1991;78(6):1081–6.
27. Tikkanen M, Nuutila M, Hiilesmaa V, Paavonen J, Ylikorkala O. Prepregnancy risk factors for placental abruption. Acta Obstet Gynecol Scand. 2006;85(1):40–4.
28. Schur E, Baumfeld Y, Rotem R, Weintraub AY, Pariente G. Placental abruption: assessing trends in risk factors over time. Arch Gynecol Obstet. 2022;306(5):1547–54.
29. Hasegawa J, Nakamura M, Hamada S, Ichizuka K, Matsuoka R, Sekizawa A, et al. Capable of identifying risk factors for placental abruption. J Matern Fetal Neonatal Med. 2014;27(1):52–6.
30. Brandt JS, Ananth CV. Placental abruption at near-term and term gestations: pathophysiology, epidemiology, diagnosis, and management. Am J Obstet Gynecol. 2023;228(5S):S1313–29.
31. Xu L, Yang T, Wen M, Wen D, Jin C, An M, et al. Frontiers in the etiology and treatment of preterm premature rupture of membrane: from molecular mechanisms to innovative therapeutic strategies. Reprod Sci. 2024;31(4):917–31.
32. Hung TH, Hsieh CC, Hsu JJ, Lo LM, Chiu TH, Hsieh TT. Risk factors for placental abruption in an Asian population. Reprod Sci. 2007;14(1):59–65.
33. Romero R, Kusanovic JP, Chaiworapongsa T, Hassan SS. Placental bed disorders in preterm labor, preterm PROM, spontaneous abortion and abruptio placentae. Best Pract Res Clin Obstet Gynaecol. 2011;25(3):313–27.
34. Ananth CV, Oyelese Y, Srinivas N, Yeo L, Vintzileos AM. Preterm premature rupture of membranes, intrauterine infection, and oligohydramnios: risk factors for placental abruption. Obstet Gynecol. 2004;104(1):71–7.
35. Ananth CV, Savitz DA, Luther ER. Maternal cigarette smoking as a risk factor for placental abruption, placenta previa, and uterine bleeding in pregnancy. Am J Epidemiol. 1996;144(9):881–9.
36. Ananth CV, Oyelese Y, Prasad V, Getahun D, Smulian JC. Evidence of placental abruption as a chronic process: associations with vaginal bleeding early in pregnancy and placental lesions. Eur J Obstet Gynecol Reprod Biol. 2006;128(1–2):15–21.
37. Ananth CV, Peltier MR, Chavez MR, Kirby RS, Getahun D, Vintzileos AM. Recurrence of ischemic placental disease. Obstet Gynecol. 2007;110(1):128–33.
38. Ananth CV, Vintzileos AM. Maternal-fetal conditions necessitating a medical intervention resulting in preterm birth. Am J Obstet Gynecol. 2006;195(6):1557–63.
39. Rasmussen S, Irgens LM, Dalaker K. A history of placental dysfunction and risk of placental abruption. Paediatr Perinat Epidemiol. 1999;13(1):9–21.
40. Zhou R, Zhe L, Chen F, Gao T, Zhang X, Huang L, et al. Maternal folic acid and vitamin B(12) supplementation during medium to late gestation promotes fetal development via improving placental antioxidant capacity, angiogenesis and amino acid transport. J Sci Food Agric. 2024;104(5):2832–41.
41. Sebastiani G, Navarro-Tapia E, Almeida-Toledano L, Serra-Delgado M, Paltrinieri AL, García-Algar Ó, et al. Effects of antioxidant intake on fetal development and maternal/neonatal health during pregnancy. Antioxidants (Basel). 2022;11(4).
42. Prins JR, Schoots MH, Wessels JI, Campmans-Kuijpers MJE, Navis GJ, van Goor H, et al. The influence of the dietary exposome on oxidative stress in pregnancy complications. Mol Aspects Med. 2022;87:101098.
43. László KD, Ananth CV, Wikström AK, Svensson T, Li J, Olsen J, et al. Loss of a close family member the year before or during pregnancy and the risk of placental abruption: a cohort study from Denmark and Sweden. Psychol Med. 2014;44(9):1855–66.
44. Xu Y, Li C, Wang W, Yu X, Liu A, Shi Y, et al. Gestational and postpartum complications in patients with first trimester thyrotoxicosis: a prospective multicenter cohort study from Northeast China. Thyroid. 2023;33(6):762–70.
45. Ono S, Kuwabara Y, Matsuda S, Yonezawa M, Watanabe K, Akira S, et al. Is hysteroscopic metroplasty using the incision method for septate uterus a risk factor for adverse obstetric outcomes? J Obstet Gynaecol Res. 2019;45(3):634–9.
46. De Franciscis P, Riemma G, Schiattarella A, Cobellis L, Colacurci N, Vitale SG, et al. Impact of hysteroscopic metroplasty on reproductive outcomes of women with a dysmorphic uterus and recurrent miscarriages: a systematic review and meta-analysis. J Gynecol Obstet Hum Reprod. 2020;49(7):101763.
47. Leunen K, Hall DR, Odendaal HJ, Grové D. The profile and complications of women with placental abruption and intrauterine death. J Trop Pediatr. 2003;49(4):231–4.
48. Pérez-Guerrero EE, Guillén-Medina MR, Márquez-Sandoval F, Vera-Cruz JM, Gallegos-Arreola MP, Rico-Méndez MA, et al. Methodological and statistical considerations for cross-sectional, case-control, and cohort studies. J Clin Med. 2024;13(14).

Acknowledgements

Not applicable.

Funding

This research was funded by the Gansu Provincial Natural Science Foundation (No. 23JRRA0990) and the Science and Technology Plan of Gansu Province (No. 24ZDCA004).

Author information

Authors and Affiliations

Department of Obstetrics, The Second Hospital of Lanzhou University, No. 82, Cuiying Gate, Chengguan District, Lanzhou City, Gansu Province, 730000, P.R. China: Dexin Chen, Xuelin Gao, Tingyue Yang, Xing Xin, Guohua Wang, Hong Wang, Rongxia He & Min Liu
Contributions

ML and RH designed the study framework, managed the research progress, secured funding, and revised the final manuscript. DC drafted the initial manuscript and conducted data analysis and visualization. XG and TY performed the literature search and selection. XX and GW carried out data extraction and analysis. HW was responsible for data validation and preservation. All authors reviewed the final version of the manuscript and provided their consent for publication.

Corresponding authors

Correspondence to Rongxia He or Min Liu.

Ethics declarations

Ethics approval and consent to participate: not applicable. Consent for publication: not applicable. Competing interests: the authors declare no competing interests.

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Supplementary Material 1. Supplementary Material 2.

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Cite this article

Chen, D., Gao, X., Yang, T. et al. Independent risk factors for placental abruption: a systematic review and meta-analysis. BMC Pregnancy Childbirth 25, 351 (2025).

Keywords: placental abruption; independent risk factors; systematic review; meta-analysis
188834
https://www-backup.salemstate.edu/swamee-jain-equation
Unveiling the Secrets of the Swamee Jain Equation for Fluid Flow Analysis

Ashley | July 29, 2025 | 8 min read

In the labyrinthine corridors of fluid mechanics, the quest for precise predictive models continues to drive research, innovation, and pedagogical refinement. Among the myriad equations that have emerged over the decades, the Swamee-Jain equation stands out as a pivotal tool for engineers and scientists aiming to estimate head loss due to pipe roughness with remarkable efficiency. Its utility extends beyond mere computational convenience, touching upon fundamental concepts of turbulent flow, surface interactions, and the complex interplay between fluid velocity and channel characteristics. Recognizing its significance is not simply a matter of academic interest but a practical necessity in designing hydraulic systems—ranging from municipal water supply networks to intricate industrial piping—where understanding energy dissipation is crucial for optimization and safety.

Table of Contents
- Deciphering the Swamee-Jain Equation: Foundation and Context
- Mathematical Structure and Derivation
- Impact on Engineering Practice and Societal Infrastructure
- Implications for Sustainable and Resilient Infrastructure
- Limitations and Opportunities for Future Research
- Emerging Developments in Fluid Modeling

Deciphering the Swamee-Jain Equation: Foundation and Context

The Swamee-Jain equation is an explicit approximation formulated to estimate the Darcy-Weisbach friction factor (f) in turbulent flow regimes within smooth and rough circular pipes. Developed in the 1970s through empirical fitting and analytical reasoning, this equation offers a straightforward alternative to solving the implicit Colebrook-White equation, which, despite its accuracy, demands iterative numerical approaches that can be computationally intensive. The primary motivation behind the Swamee-Jain formulation was to facilitate rapid calculations without significant loss of precision, thereby streamlining the iterative processes inherent in hydraulic engineering design programs.

Mathematical Structure and Derivation

At its core, the Swamee-Jain equation balances empirical observations with theoretical underpinnings, resulting in a formula that approximates the Colebrook-White equation's solutions across a broad spectrum of flow conditions. Its form is typically expressed as:

f = 0.25 / [ log10( ε / (3.7 D) + 5.74 / Re^0.9 ) ]^2

where f is the Darcy friction factor, ε is the pipe roughness height, D is the pipe diameter, and Re is the Reynolds number. This explicit relation embodies the delicate balance between turbulent flow's inertia and viscous effects, capturing surface roughness's influence with notable accuracy even at high Reynolds numbers.
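To make the formula concrete, here is a short Python sketch of the Swamee-Jain estimate alongside a fixed-point solver for the implicit Colebrook-White equation; the function names and test values are our own illustration, not code from any cited source.

```python
import math

def swamee_jain(re: float, rel_rough: float) -> float:
    """Explicit Swamee-Jain estimate of the Darcy friction factor f.

    re: Reynolds number (turbulent flow, roughly Re > 4000)
    rel_rough: relative roughness eps/D
    """
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / re**0.9) ** 2

def colebrook(re: float, rel_rough: float, iters: int = 50) -> float:
    """Implicit Colebrook-White equation solved by fixed-point iteration,
    shown here only for comparison with the explicit approximation."""
    f = swamee_jain(re, rel_rough)  # explicit estimate as the initial guess
    for _ in range(iters):
        f = (-2 * math.log10(rel_rough / 3.7 + 2.51 / (re * math.sqrt(f)))) ** -2
    return f

re, rr = 1e5, 1e-4  # illustrative values
print(swamee_jain(re, rr), colebrook(re, rr))  # the two typically agree closely
```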
| Relevant Category | Substantive Data |
| --- | --- |
| Approximate Accuracy | Within 2% of Colebrook-White solutions over common engineering ranges |
| Reynolds Number Range | Re > 4000, typically up to 10^8 for practical purposes |
| Default Adjustments | Variants exist for different roughness conditions, including fully rough and transitional regimes |

💡 The Swamee-Jain approach exemplifies how empirical modeling can innovate within classical fluid mechanics, offering computational swiftness without severely compromising accuracy, which is paramount when real-time calculations or large-scale simulations are necessary.

Impact on Engineering Practice and Societal Infrastructure

The adoption of the Swamee-Jain equation has significantly influenced contemporary hydraulic design standards and practices. Its ability to quickly approximate head loss translates into more efficient hydraulic optimization, cost-effective pipe material selection, and improved system resilience. In urban infrastructure, where millions of liters of water are transported daily, every marginal gain in predictive precision can lead to substantial savings and longevity improvements. Moreover, the equation’s simplicity reduces reliance on iterative numerical methods, democratizing access to robust hydraulic modeling, especially in resource-constrained contexts where advanced software may be unavailable.

Implications for Sustainable and Resilient Infrastructure

Beyond traditional applications, the Swamee-Jain equation underpins emerging trends in sustainable design, such as low-energy fluid conveyance and smart pipe networks. Its expedience supports optimization algorithms that integrate real-time data streams, fostering adaptive management of water systems amid climate variability. As cities globally confront water scarcity and infrastructure aging, such analytical tools will be central to advancing resilient, adaptive, and efficient water distribution frameworks, all rooted in an understanding of fundamental fluid flow dynamics.

| Relevant Category | Substantive Data |
| --- | --- |
| Urban Planning | Facilitates design of cost-effective pipe networks with minimized energy consumption |
| Environmental Impact | Supports calculations that optimize for lower pump energy, reducing carbon footprint |
| Innovation Trend | Enables integration into smart grid algorithms for proactive leak detection and maintenance |

💡 As hydraulic systems become smarter, simplifications like the Swamee-Jain equation will underpin adaptive control algorithms, reducing energy costs and enhancing system sustainability in tandem.

Limitations and Opportunities for Future Research

While the Swamee-Jain equation is invaluable, it is not without limitations. Its reliance on empirical fits means that it may falter at transitional or extremely rough pipe conditions, where flow regimes deviate from turbulent norms. For laminar flow or highly irregular surfaces, alternative models or numerical approaches still hold the edge. Additionally, the advent of microfluidic devices and unconventional materials calls for further refinement of these formulas, injecting new data into the modeling landscape.
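The 2% figure quoted above can be checked directly against an iterative Colebrook-White solve. The sketch below is one such cross-check, assuming the swamee_jain function from the earlier snippet; the fixed-point seed and tolerance are arbitrary illustrative choices:

```python
import math

def colebrook(re: float, rel_roughness: float, tol: float = 1e-10) -> float:
    """Solve the implicit Colebrook-White equation by fixed-point iteration
    on x = 1/sqrt(f):  x = -2 log10( eps/(3.7 D) + 2.51 x / Re ).
    """
    x = 7.0  # seed near 1/sqrt(0.02), a typical turbulent friction factor
    while True:
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new**2
        x = x_new

# Compare the two formulas over a grid of turbulent conditions
for re in (1e4, 1e5, 1e6, 1e7):
    for rr in (1e-5, 1e-4, 1e-3):
        f_sj, f_cw = swamee_jain(re, rr), colebrook(re, rr)
        print(f"Re={re:.0e} eps/D={rr:.0e}: SJ={f_sj:.5f} CW={f_cw:.5f} "
              f"diff={100 * abs(f_sj - f_cw) / f_cw:.2f}%")
```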
Emerging Developments in Fluid Modeling

Advances in computational fluid dynamics (CFD), machine learning, and high-fidelity simulations are opening new pathways for refining traditional equations. Hybrid models that combine empirical simplicity with detailed turbulence modeling promise to extend the accuracy domain of formulas like Swamee-Jain’s, especially in complex geometries or variable roughness conditions. Their integration into decision support systems could lead to smarter, more adaptable infrastructure management, echoing the evolutionary trends seen in other engineering domains.

| Relevant Category | Substantive Data |
| --- | --- |
| Research Trend | Increasing use of AI algorithms to calibrate empirical equations dynamically |
| Material Science | New surface finishes reduce ε, affecting the applicability of classical formulas |
| Computational Tools | Enhanced CFD tools complement analytical methods, improving predictive fidelity |

💡 The future may see the confluence of machine learning with classical hydraulic equations, transforming them from static approximations into adaptive tools that learn and evolve with ongoing data, pushing the boundaries of fluid flow analysis.

Key Points
- Rooted in empirical fitting, the Swamee-Jain equation offers rapid head loss estimation across a broad Reynolds number spectrum.
- Its simplicity facilitates integration into design software, enabling efficient system optimization and cost savings.
- While accurate within turbulent ranges, limitations exist at transitional or highly specialized flow conditions, inviting ongoing research.
- Emerging technologies will likely enhance and extend the applicability of such formulas through hybrid and AI-driven approaches.
- Understanding these equations' nuances enhances infrastructure resilience and promotes sustainable resource management.

What is the primary advantage of using the Swamee-Jain equation over the Colebrook-White formula?
The Swamee-Jain equation provides an explicit formula for the Darcy friction factor, eliminating the need for the iterative solving required by Colebrook-White, thereby enabling faster and more straightforward calculations in hydraulic design.

In which flow regime is the Swamee-Jain equation most accurate?
It performs best within turbulent flow regimes, typically characterized by Reynolds numbers greater than 4000, maintaining accuracy within a couple of percent of more precise iterative solutions.

Are there scenarios where the Swamee-Jain equation should not be used?
Yes, particularly in laminar flow, transitional regimes, or with highly irregular or rough surfaces where empirical fits no longer hold, and more detailed numerical simulations or alternative analytical models are preferable.

How do advancements in computational tools affect the use of empirical equations like Swamee-Jain?
While computational tools enable detailed CFD analysis, empirical equations remain essential for rapid initial estimates, real-time control algorithms, and situations where computational resources are limited. Future developments may integrate both approaches for superior prediction accuracy.

What are potential areas for improving the Swamee-Jain equation?
Future research could focus on extending the formula’s validity to transitional or rough-surface flows, incorporating surface roughness variability, or hybridizing with machine learning models to adapt dynamically to changing conditions and materials.
© Copyright 2025, All Rights Reserved. Salem State Vault
188835
https://curriculum.illustrativemathematics.org/HS/teachers/3/2/21/index.html
Illustrative Mathematics Algebra 2, Unit 2.21 - Teachers | IM Demo

Lesson 21: Rational Equations (Part 2)

Preparation Lesson Practice View Student Lesson

21.1: Math Talk: Adding Rationals (5 minutes)

CCSS Standards: Building Towards HSA-REI.A

Routines and Materials: Instructional Routines: Math Talk; MLR8: Discussion Supports

Warm-up

In this warm-up, students have an opportunity to notice and make use of structure (MP7), because the skills they use to solve equations involving fractions also work to solve equations with more complex rational expressions.

Launch

Display one problem at a time. Give students quiet think time for each problem and ask them to give a signal when they have an answer and a strategy. Keep all problems displayed throughout the talk. Follow with a whole-class discussion.

Representation: Internalize Comprehension. To support working memory, provide students with sticky notes or mini whiteboards. Supports accessibility for: Memory; Organization

Student Facing

Solve each equation mentally:

Student Response

For access, consult one of our IM Certified Partners.

Activity Synthesis

Ask students to share their strategies for each problem. Record and display their responses for all to see. To involve more students in the conversation, consider asking:
“Who can restate _____’s reasoning in a different way?”
“Did anyone have the same strategy but would explain it differently?”
“Did anyone solve the problem in a different way?”
“Does anyone want to add on to _____’s strategy?”
“Do you agree or disagree? Why?”

Speaking: MLR8 Discussion Supports. Display sentence frames to support students when they explain their strategy. For example, "First, I _____ because . . . ." or "I noticed _____ so I . . . ." Some students may benefit from the opportunity to rehearse what they will say with a partner before they share with the whole class. Design Principle(s): Optimize output (for explanation)

21.2: A Rational River (15 minutes)

CCSS Standards: Addressing HSA-CED.A.1, HSA-REI.A.1

Routines and Materials: Instructional Routines: Graph It; MLR8: Discussion Supports; Think Pair Share

Activity

Continuing their work from the previous lesson, the purpose of this activity is for students to write a rational equation to model a situation and then use it to answer questions. The difference between this activity and previous ones in which students solved rational equations is that now students must solve a rational equation in which the denominators of the two expressions on each side are different expressions involving the variable r. Monitor for students who either graph each expression to find the point of intersection or set up an equation that they solve, to share during the class discussion. Making graphing technology available gives students an opportunity to choose appropriate tools strategically (MP5).

Launch

Arrange students in groups of 2. Tell students to read the activity and answer the first problem. After quiet work time, ask students to compare their responses to their partner’s and reach agreement on the two expressions.
Select 2–3 students to share their expressions with the class, recording them for all to see. Once students are in agreement on the expressions, allow them to continue to the next problem.

Conversing: MLR8 Discussion Supports. Use this routine to help students explain their expression to their partner. Students should take turns sharing their expressions and explaining their reasoning to their partner. Display the following sentence frames for all to see: “_____ represents _____”, “Where does _____ show . . . ?”, and “That could (or couldn’t) be true because . . . .” Encourage students to challenge each other when they disagree. This will help students clarify their reasoning about modeling situations with rational expressions. Design Principle(s): Support sense-making; Maximize meta-awareness

Student Facing

Noah likes to go for boat rides along a river with his family. In still water, the boat travels about 8 kilometers per hour. In the river, it takes them the same amount of time to go upstream 5 kilometers as it does to travel downstream 10 kilometers.

Attribution: Family boating, by publicdomainpictures. Public Domain.

If the speed of the river is r, write an expression for the time it takes to travel 5 kilometers upstream and an expression for the time it takes to travel 10 kilometers downstream.

Use your expressions to calculate the speed of the river. Explain or show your reasoning.

Student Response

For access, consult one of our IM Certified Partners.

Anticipated Misconceptions

Students may not know how to use the information that the boat goes 8 kph in still water. Tell them that if a boat is going downstream, then the river is pushing it forward, so it will travel at its speed plus the river’s speed. If the boat is going upstream, then the river will push against it and slow it down.

Activity Synthesis

The purpose of this discussion is for students to share how they solved a rational equation in which the denominators of the rational expressions are not the same. During the discussion, students should make connections to how they solved rational equations in the previous lesson in order to reason that they can strategically multiply by a common denominator to get an equation without variables in denominators.

Select previously identified students to share how they answered the last question, starting with any who used graphs. The main focus of the discussion should be on how students identified a solution to the equation 5/(8 - r) = 10/(8 + r). Here are some questions for discussion:
“What are some possible first steps to solving the equation?” (Multiplying by 8 - r or 8 + r.)
“Is it better to multiply each side by 8 - r first or 8 + r first?” (No matter the order, multiplying by one then the other results in the linear equation 5(8 + r) = 10(8 - r).)

If time allows, display a graph of y = 5/(8 - r) and ask, “What is the meaning of the vertical asymptote in this situation?” (If the boat goes 8 kilometers per hour and the river goes 8 kilometers per hour, then the boat does not go anywhere.)

21.3: Rational Resistance (15 minutes)

CCSS Standards: Addressing HSA-REI.A.1, HSA-REI.D.11

Routines and Materials: Instructional Routines: MLR1: Stronger and Clearer Each Time; Think Pair Share. Required Materials: Graphing technology

Activity

In this activity, students use the formula for the total resistance of circuits in parallel to write a rational equation for an unknown resistance. The particular equation students write involves the sum of two rational expressions, which students have not seen before.
They then solve the equation by graphing, which relates back to work earlier in the unit when students identified a solution to a 5th degree polynomial using a graph. Monitor for students writing clear descriptions for how Clare could use graphs to determine the value of R_1.

Launch

Begin the activity by asking students if they know what a circuit is and where it is used. If not suggested, tell students that one example of a circuit is a flashlight, in which the batteries, light, and switch together make a circuit. In a circuit, resistance is like friction—it makes it harder for electricity to flow. Often we want some resistance in a circuit so it can do work. For example, the filament in a light bulb glows because of its high resistance. In this activity, students are going to consider a law about circuits that are run in parallel, like the ones shown in this diagram. For students who are still unsure about what a circuit is, it may be helpful to mentally picture R_1, R_2, and R_3 in this picture as three light bulbs that are connected to the 12-volt battery on the far left. If students have not experienced subscript notation previously, let them know that when talking about the same type of thing, such as 3 light bulbs, the same letter with a different number written smaller and to the bottom right can be used to tell the difference between the objects.

Arrange students in groups of 2. Provide access to graphing technology. Display the task statement and first 2 questions for all to see. Give quiet work time for students to answer these questions, followed by sharing work with a partner. Select 2–3 students to share their equation with the class, recording student reasoning for all to see.

Writing, Conversing: MLR1 Stronger and Clearer Each Time. Use this routine to help students improve their writing by providing them with multiple opportunities to clarify their explanations through conversation. Give students time to meet with 2–3 partners to share their response to the last question. Students should first check to see if they agree with each other about how Clare should use the graph to estimate the resistance of the first circuit if T is 85 ohms. Provide listeners with prompts for feedback that will help their partner add detail to strengthen and clarify their ideas. For example, students can ask their partner, “What did you do first?”, “How did you use the information in the graph to help you?” or “How did you find the point of intersection?” Next, provide students with 3–4 minutes to revise their initial draft based on feedback from their peers. This will help students produce written explanations of how they can use graphs to solve rational equations. Design Principle(s): Optimize output (for explanation)

Representation: Internalize Comprehension. Differentiate the degree of difficulty or complexity by beginning with an example with more accessible values. Show that 1/2 + 1/3 = 5/6, and that this is equivalent to 1/(6/5). Notice that while 5/6 is more than 1/2 and 1/3—which it should be, because we added two positive numbers—6/5 is smaller than both 2 and 3, which is consistent with the idea that the bigger the denominator of a fraction, the smaller the number it represents. Create a display to show patterns in adding fractions with numerator 1, first with numbers (1/2 + 1/3 = 5/6) and then algebraically (1/a + 1/b has the common denominator ab, and after rewriting both fractions to have this same denominator, we get (a + b)/(ab)) to use as a reference.
Supports accessibility for: Conceptual processing

Student Facing

Circuits in parallel follow this law: The inverse of the total resistance is the sum of the inverses of each individual resistance. We can write this as:

1/T = 1/R_1 + 1/R_2 + . . . + 1/R_n

where there are n parallel circuits and T is the total resistance. Resistance is measured in ohms.

Two circuits are placed in parallel. The first circuit has a resistance of 40 ohms and the second circuit has a resistance of 60 ohms. What is the total resistance of the two circuits?

Two circuits are placed in parallel. The second circuit has a resistance of 150 ohms more than the first. Write an equation for this situation showing the relationships between T and the resistance R_1 of the first circuit.

For this circuit, Clare wants to use graphs to estimate the resistance of the first circuit if T is 85 ohms. Describe how she could use a graph to determine the value of R_1 and then follow your instructions to find R_1.

Student Response

For access, consult one of our IM Certified Partners.

Student Facing

Are you ready for more?

Two circuits with resistances of 40 ohms and 60 ohms have a combined resistance of 24 ohms when connected in parallel. If we had used two circuits that each had a resistance of 48 ohms, they would have had that same combined resistance. 48 is called the harmonic mean of 40 and 60. A more familiar way to find the mean of two numbers is to add them up and divide by 2. This is the arithmetic mean. Here is how each kind of mean is calculated:

Harmonic mean of a and b: 2 / (1/a + 1/b) = 2ab / (a + b)

Arithmetic mean of a and b: (a + b) / 2

The harmonic mean of 40 and 60 was 48, and their arithmetic mean is (40 + 60)/2 = 50. Experiment with other pairs of numbers. What can you conclude about the relationship between the harmonic mean and arithmetic mean?

Student Response

For access, consult one of our IM Certified Partners.

Anticipated Misconceptions

Students may be unsure how to use graphs to find the solution to 1/85 = 1/R_1 + 1/(R_1 + 150). Ask them to consider a simpler problem and how they could use a graph to identify the value(s) of the variable that make(s) the equation true.

Activity Synthesis

Earlier in this unit, students used graphs to identify what value of the variable made a polynomial equation true. This activity asks students a similar question, but with a rational equation instead of just a polynomial. An important takeaway from this discussion is for students to recognize that even as equations become more complicated and include things like rational expressions added together, everything they have learned about identifying solutions to equations is still true. Specifically, the point of intersection for graphs of the expressions on the left and right side gives the value of the variable at which the two sides are equal.

Select previously identified students to share their graphing directions, and then select one set of directions to follow while displaying the graphs for all to see. If time allows, ask students to use graphing technology to identify the value of R_1 needed for different values of T.

Lesson Synthesis

Arrange students in groups of 2. Tell students that while they have solved this equation using a graph, now they are going to solve it using algebra. Display the equation 1/85 = (2R_1 + 150) / (R_1(R_1 + 150)) for all to see and tell students that a first step, rewriting the equation to have a single fraction on each side, has been done for them. Ask students to identify what they could multiply each side of the equation by to get a new equation with no variables in any denominators.
After a brief quiet think time, ask students to compare their responses to their partner’s and decide if they are both correct, even if they are different. Select 2–3 students to share what they would multiply by, recording responses for all to see. (R_1 and R_1 + 150, or just R_1(R_1 + 150).) Once students are in agreement, add the result of multiplying each side by R_1(R_1 + 150): (1/85)(R_1)(R_1 + 150) = 2R_1 + 150. At this point, tell students they have a choice: 1/85 is a number, so we can use the distributive property on the left expression, or we could multiply each side by 85. Ask students to use whichever method they choose to rewrite the equation in the form ax^2 + bx + c = 0. After a brief quiet think time, ask students to compare their responses to their partner’s and decide if they are both correct, even if they are different.

Next, ask students what type of equation this is. Depending on the form students used, it may be more or less clear that they have a quadratic equation, which is a type of equation students learned to solve in a previous course using methods such as the quadratic formula. If time allows, ask students to solve this equation using a method of their choice. Students will have more opportunity to solve these types of equations in a future lesson.

Conclude this discussion by telling students that solving rational equations by multiplying strategically in order to rewrite the equation without variables in denominators is sometimes called “clearing the denominators”.

21.4: Cool-down - Solving Rational Equations (5 minutes)

CCSS Standards: Addressing HSA-REI.A.1

Cool-Down

For access, consult one of our IM Certified Partners.

Student Lesson Summary

Student Facing

A boat travels about 6 kilometers per hour in still water. If the boat is on a river that flows at a constant speed of r kilometers per hour, it can travel at a speed of 6 + r kilometers per hour downstream and 6 - r kilometers per hour upstream. (And if the river current is the same speed as the boat, the boat wouldn’t be able to travel upstream at all!)

On one particular river, the boat can travel 4 kilometers upstream in the same amount of time it takes to travel 12 kilometers downstream. Since time is equal to distance divided by speed, we can express the travel time as either 4/(6 - r) hours or 12/(6 + r) hours. If we don’t know the travel time, we can make an equation using the fact that these two expressions are equal to one another, and figure out the speed of the river:

4/(6 - r) = 12/(6 + r)
4(6 + r) = 12(6 - r)
24 + 4r = 72 - 12r
16r = 48
r = 3

Substituting this value into the original expressions, we have 4/(6 - 3) = 4/3 and 12/(6 + 3) = 4/3, so these two expressions are equal when r = 3. This means that when the water flow in the river is about 3 kilometers per hour, it takes the boat 1 hour and 20 minutes to go 4 kilometers upstream and 1 hour and 20 minutes to go 12 kilometers downstream.

Even though we started out with a rational expression on each side of the equation, multiplying each side by the product of the denominators, (6 - r)(6 + r), resulted in an equation similar to ones we have solved before. Multiplying to get an equation with no variables in denominators is sometimes called “clearing the denominators.”
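For reference, here is one way the lesson synthesis equation can be finished with the quadratic formula, using the R_1 naming adopted above (a sketch; the lesson itself leaves the choice of method to students):

```latex
\frac{1}{85} = \frac{1}{R_1} + \frac{1}{R_1 + 150}
\;\Rightarrow\; R_1(R_1 + 150) = 85(R_1 + 150) + 85R_1
\;\Rightarrow\; R_1^2 - 20R_1 - 12750 = 0
\;\Rightarrow\; R_1 = 10 + \sqrt{12850} \approx 123.4 \ \text{ohms (taking the positive root)}
```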
© 2019 Illustrative Mathematics®. Licensed under the Creative Commons Attribution 4.0 license. The Illustrative Mathematics name and logo are not subject to the Creative Commons license and may not be used without the prior and express written consent of Illustrative Mathematics. This book includes public domain images or openly licensed images that are copyrighted by their respective owners. Openly licensed images remain under the terms of their respective licenses. See the image attribution section for more information.
188836
https://www.physio-pedia.com/Glasgow_Coma_Scale
Glasgow Coma Scale

Original Editor - Megan Craig, as part of the Queen's University Neuromotor Function Project
Top Contributors - Jillian Burton, Megan Craig, Emily Lingerfelt, Kendyl Wilson

Contents
1 Objective
2 Intended Population
3 Method of Use
3.1 Scoring
4 Evidence
4.1 Reliability
4.2 Validity
4.3 Responsiveness
5 Resources
6 References
Objective

Coma: no motor response to intense painful stimulation.

The Glasgow Coma Scale (GCS) was first created by Graham Teasdale and Bryan Jennett in 1974. It is a clinical scale used to assess a patient’s “depth and duration of impaired consciousness and coma” following an acute brain injury. Healthcare practitioners monitor the motor responsiveness, verbal performance, and eye-opening of the patient in the form of a simple chart. The GCS is the most commonly used tool internationally for this assessment and has been translated into 30 languages. It should not, however, be confused with the Glasgow Outcome Scale (GOS), which evaluates persistent disability after brain damage.

Intended Population

The Glasgow Coma Scale was originally developed to help determine the severity of a coma or dysfunction following a traumatic brain injury, but it can be useful for any condition leading to impaired consciousness. Today, it is consistently used for many conditions, including:
- stroke (subarachnoid haemorrhage, intracerebral haemorrhage, or ischaemic stroke)
- infection
- seizures
- brain abscess
- general trauma and ITU patients
- non-traumatic coma
- overdose
- poisoning

It can also be administered in a variety of settings, such as pre-hospital, on arrival at the emergency department, and in the hours following admission, giving it the ability to monitor changes and trends in patient consciousness over time. Modified scales have been developed for use in other populations. The Glasgow Coma Scale - Extended (GCS-E) includes an amnesia scale in order to avoid the premature discharge of patients with mild traumatic brain injury. Modified scales have also been developed for use in the paediatric population. The motor scale has proved the most useful for assessment in both older children and preverbal children when studying blunt trauma. Research has indicated that using the motor scale alone can simplify the assessment process while maintaining the accuracy of the score.

Method of Use

The GCS Assessment Aid has four steps to the assessment process: check, observe, stimulate, rate. The assessor should evaluate each of the subscales as listed in the Assessment Aid. Each subscale has several components, and a score is assigned based on the level of consciousness; a higher score indicates a greater level of consciousness. The GCS uses three sites for stimulation: fingertip pressure, trapezius pinch, and the supraorbital notch. When stimulating these areas, healthcare practitioners should look for one of two responses: an abnormal flexion response or a normal flexion response.

The National Institute for Health and Care Excellence (NICE) published Clinical Guidelines on Head Injuries for Assessment and Early Management. NICE recommends the following: until a patient has achieved a GCS score of 15, they should be observed every half hour.
Once the GCS score has reached 15, the patient should be re-assessed using the GCS every half hour for two consecutive hours. If the patient's GCS score remains at 15, the patient should then be observed once every hour for four hours, and every 2 hours after that. Note: if at any time a patient's GCS score drops below 15, healthcare practitioners should revert to observing the patient every half hour.

The Institute of Neurological Sciences NHS Greater Glasgow and Clyde created a YouTube video to demonstrate how to properly use the outcome measure.

Scoring

Mild TBI: GCS 13-15. These patients are awake and may present with confusion, but they are able to follow directions and communicate.
Moderate TBI: GCS 9-12. These patients are typically drowsy or obtunded; they can open their eyes and localise painful stimuli upon assessment.
Severe TBI: GCS 3-8. These patients present as obtunded to comatose and are unable to follow directions. They may exhibit decorticate or decerebrate posturing.

Evidence

Reliability

The inter-rater reliability of the total Glasgow Coma Scale is p = 0.86. Some research has subdivided the inter-rater reliability for each subscale: for the eye score it is p = 0.76, for the verbal score p = 0.67, and for the motor score p = 0.81. The research on test-retest reliability is not recent and should be updated; however, the best available evidence is k = 0.66 - 0.77. Based on a recent systematic review, the total score is typically less reliable than the individual components, with a total Kappa value of 77% as compared to the eye, motor, and verbal scores, which had Kappa values of 89%, 94%, and 88% respectively.

Validity

The validity of the Glasgow Coma Scale is often questioned because many hospitals administer the test while patients are sedated, which tends to underestimate patient scores. It is also difficult to elicit accurate scores when patients are intubated. Recent research has disputed that intubation produces significantly different results, reporting correlations of r = 0.90 for the verbal score and r = 0.97 for the total score. The motor score is consistently the most predictive component of the GCS.

Responsiveness

Given the current best available evidence, the GCS has a low sensitivity (56.1%) and a high specificity (82.2%). There are therefore very few false positives, i.e. healthy individuals predicted to have a low rate of survival. It is argued that the GCS does not accurately score patients who are intubated and does not assess brainstem reflexes, which may account for its low predictive capacity. A GCS administered at 24 hours post-injury has an odds ratio of 0.4 for predicting in-hospital mortality; when administered at 72 hours post-injury, the odds ratio improves to 0.59. Evidence suggests that the Glasgow Coma Scale has a 71% accuracy in predicting functional independence post-injury. The GCS also modestly correlates with the Disability Rating Scale (-0.28) and the cognitive component of the Functional Independence Measure (0.37).
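As a quick illustration of the component ranges and severity bands described above, here is a minimal Python sketch; the function names and input-validation choices are mine, not part of the published scale:

```python
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS component scores (eye 1-4, verbal 1-5, motor 1-6)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("component score out of range")
    return eye + verbal + motor

def tbi_severity(total: int) -> str:
    """Map a total GCS score (3-15) to the severity bands in the text."""
    if not 3 <= total <= 15:
        raise ValueError("GCS total must be between 3 and 15")
    if total >= 13:
        return "mild"
    if total >= 9:
        return "moderate"
    return "severe"

print(tbi_severity(gcs_total(eye=3, verbal=4, motor=5)))  # 12 -> "moderate"
```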
Resources

The Glasgow Structured Approach to Assessment of the Glasgow Coma Scale
NICE Guideline to Head Injury Assessment and Early Management
Glasgow Outcome Scale
Glasgow Outcome Scale - Extended
Traumatic Brain Injury: Physiopedia
Stroke: Physiopedia
Disability Rating Scale
Functional Independence Measure (FIM): Physiopedia

Related articles

Coma Recovery Scale (Revised) - Physiopedia

Objective

Man in a coma

The Coma Recovery Scale (CRS-R), also known as the JFK Coma Recovery Scale - Revised, is used to assess patients with a disorder of consciousness, commonly coma. It may be used to differentiate between vegetative state (VS) and minimally conscious state (MCS). It can also be used to monitor emergence from minimally conscious state (EMCS or MCS+).

Intended Population

Traumatic Brain Injury (TBI); Stroke (CVA); Brain Tumour

Method of Use

The CRS-R consists of 23 items, grouped into 6 sub-scales: Auditory, Visual, Motor, Oromotor, Communication, Arousal. The lowest score on each sub-scale represents reflexive activity; the highest represents behaviours mediated by cognitive input. The total score ranges between 0 (worst) and 23 (best). This measure takes a minimum of 25 minutes to complete.

Equipment Required

Instruction Sheet; Scoring Sheet; 2 Common Functional Objects (often a cup and a hairbrush or comb); An object which produces a loud noise; Brightly Coloured Object; ADL Items, e.g. toothbrush, phone; Hand-held Mirror; Baseball-Sized Ball; Pencil; Tongue Depressor

Languages

Available in several languages. As well as the original English version, there are French, German, Italian, Spanish, Dutch and Norwegian translations.

Evidence

Giacino, Kalmar and Whyte studied 80 patients with severe acquired brain injury. These individuals were admitted to an inpatient Coma Intervention Programme with a diagnosis of either vegetative state (VS) or minimally conscious state (MCS). They compared the CRS-R to the Disability Rating Scale (DRS) and found that the total scores showed "significant correlation" between the 2 scales, which indicates acceptable concurrent validity. In addition, the CRS-R was able to distinguish 10 patients in an MCS who were scored as in VS by the DRS.

Reliability

Test-retest Reliability

Disorders of consciousness presentation - TBI (Traumatic Brain Injury), CVA (Stroke), hypoxic-ischaemic brain injury and tumour: n=20; mean age = 36.7 years (range 17 to 57 years); mean time post injury = 57.15 days (range 22 to 169 days). Excellent test-retest reliability (Spearman rho = .94).

Inter/Intra-rater Reliability

Disorders of consciousness presentation (variety of neurological conditions, including TBI): n=77; age range 19-86 years; 43 patients 1-27 days post injury, 34 patients 27 days to 24 years post injury. Excellent reliability for total score (k=.80). Excellent reliability for subscales: Auditory k=.82; Visual k=.85; Motor k=.93; Oromotor k=.92; Communication k=.98; Arousal k=.74.
Validity

This scale shows excellent concurrent validity, as it correlates significantly with total scores on the original CRS and the DRS. Concurrent validity with CRS: Spearman rho = .97. In addition, in the original study by Giacino, Kalmar & Whyte, which had 80 inpatients with severe ABI, the CRS-R was able to distinguish 10 patients in an MCS who were misclassified as being in a VS by the DRS.

Sensitivity and Specificity

Sensitivity and Specificity of the Coma Recovery Scale-Revised Total Score in Detection of Conscious Awareness (Bodien et al, 2016):
A CRS-R total score of 10 or higher yielded a sensitivity of .78 for correct identification of patients in MCS or EMCS.
A CRS-R total score of 10 or higher gave a specificity of 1.00 for correct identification of patients who did not meet criteria for either of these diagnoses (i.e., were diagnosed with vegetative state or coma).

Miscellaneous

The Center for Outcome Measurement in Brain Injury (COMBI) has a useful page on the CRS-R.

Recommendations

The American Congress of Rehabilitation Medicine, Brain Injury - Interdisciplinary Special Interest Group set up a Disorders of Consciousness Task Force to conduct a systematic review of assessment scales for DOC (Disorders of Consciousness) and establish recommendations for use in clinical settings. The conclusion was that the CRS-R was the most appropriate scale to assess DOC, scoring better than all the other scales examined (which included SMART, WNSSP, SSAM, WHIM, and DOCS).

Links

CRS-R Administration and Scoring Guidelines (Updated)

Disorders of Consciousness - Physiopedia

Introduction

Some patients following moderate to severe traumatic brain injury will present with profound and prolonged impairment of consciousness. Their rehabilitation needs might differ from those of other patient groups and usually require treatments to enhance consciousness, along with the other forms of treatment and therapy used in traumatic brain injury neurorehabilitation. Prevention of medical and neurological complications is agreed to be the main focus for this group, and currently there is no pharmacological treatment proven to speed up or improve recovery from disorders of consciousness. There is no formal register of patients with disorders of consciousness, mainly due to difficulties with diagnostic codes and the transient nature of some consciousness conditions. Patients with disorders of consciousness demonstrate damage in various areas of the brain, with cortico-thalamic network activation deficits being common.

Coma: no motor response to intense painful stimulation.

Patients with disorders of consciousness can be treated in various environments, from acute care through inpatient rehabilitation to community or nursing care facilities. The facilities and treating team should be experienced in looking after patients with disorders of consciousness and should utilise a multidisciplinary approach with family and relatives engaged. It is recommended that multisensory stimulation is provided, with auditory (normal talking), visual (pictures), tactile-kinaesthetic (movement and touch) and olfactory (familiar scents like perfumes or food) stimuli used alongside nursing care and therapy.

Altered consciousness, resulting from moderate to severe traumatic brain injury, relates to changes in a person’s state of consciousness, awareness or responsiveness.
Consciousness requires simultaneous wakefulness and awareness, and the relationship between the two differs across the types of consciousness disorder:

| | Wakefulness | Awareness |
| --- | --- | --- |
| Coma | - | - |
| Vegetative State | + to ++ | - |
| Minimally Conscious State | + to ++ | + |
| Emerged from Minimally Conscious State | ++ | ++ |

Unconsciousness Conditions

Unconsciousness conditions include:

1. Coma

Man in a coma

A person in a coma is not aroused, is unaware of self and environment, and is unable to respond to any stimulus. This results from widespread damage to all parts of the brain. The person with TBI may emerge from a coma or enter a vegetative state at various times after the trauma.

2. Vegetative State

A person in a vegetative state will demonstrate basic wakefulness, with some degree of the sleep-wake cycle restored, but there is no awareness. The person is unaware of their surroundings but may open their eyes, make sounds, respond to reflexes and/or move. Some facial expressions may be observed without apparent cause. Demonstrated behaviour might include: posturing in response to pain, vocalisation, reflexive movement patterns, and startle to visual stimuli. The person can remain in a vegetative state permanently, but some patients make a transition to a minimally conscious state. The speed of transition and degree of emergence from a coma or vegetative state depends on the extent of the brain damage. About 50% of people with TBI in a vegetative state one month after the trauma will recover consciousness; however, various degrees of residual physical and cognitive deficits are often present.

PEG feeding

A person in a coma requires complex care, and some of their needs might include:
- Postural management programme preventing deformities, contractures and pressure sores, including muscle tone management through positioning, splinting, mobilising, and sitting in alternative seating systems
- Bladder and bowel management
- Respiratory care, including secretion management (e.g. suctioning) and tracheostomy management
- Percutaneous endoscopic gastrostomy (PEG) feeding
- Management of infections such as urinary tract infection and chest infection
- Management or prevention of medical and neurological complications such as seizures

Conscious Conditions

Minimally Conscious State

A condition of severely altered consciousness, but with some signs of self-awareness or awareness of the environment. The awareness can fluctuate in degree and consistency but is reproducible. Different forms of minimally conscious state have been defined:
- Minimally conscious state minus: characterised by the absence of linguistically mediated behaviour, with only lower-level responses such as visual pursuit.
- Minimally conscious state plus: characterised by linguistically mediated behaviour such as command following or verbalisation.
- Emerged from minimally conscious state: return of functional object use and functional communication.

Behaviours specific to the minimally conscious state include: localisation to pain stimuli, non-reflexive movement patterns, fixation and pursuit of visual stimuli, intelligible verbalisation, inconsistently following commands, unreliable yes/no responses and some inconsistent object manipulation. The minimally conscious state is sometimes an intermediate state between coma or vegetative state and full consciousness.
Whilst emerging from a minimally conscious state, people experience confusion, which can present as disorientation, attention and memory deficits, restlessness, fluctuating responsiveness, drowsiness and possible delusions. Usually, the shorter the confusional state, the better the recovery.

Different states of consciousness also include:

1. Locked-in Syndrome

Diving Bell and the Butterfly

Locked-in syndrome usually results from brainstem pathology which disrupts the voluntary control of movement without abolishing either wakefulness or awareness (RCP Guideline 2013). Patients who are ‘locked-in’ are profoundly paralysed but conscious. They can use various forms of communication, such as simple facial expressions, eye or eyelid movements, and computerised eye-gaze systems, once their clinical status has been established. However, reaching the diagnosis can take a long time, which is very frustrating for the patient. With medical advances, a person living with locked-in syndrome has a substantial life expectancy and is able to control their environment and access technology for word processing, voice synthesis and internet use. The personal experience of locked-in syndrome was described by the journalist Jean-Dominique Bauby in his book “The Diving Bell and the Butterfly”, which was also successfully filmed.

2. Brainstem Death

Brainstem death is declared when there is no measurable activity in the brain and the brainstem. During a strict testing routine, the following findings confirm brain death: coma, lack of brainstem reflexes, and apnoea. In a person who has been declared brain dead, the removal of breathing devices will result in cessation of breathing and eventual heart failure. Brain death is considered irreversible and can be declared by 2 senior doctors completing the tests twice; only if all the tests on both occasions are consistent with brain death can it be certified. The process is followed by defined steps allowing liaison with relatives and the further steps of removing mechanical ventilation or engaging transplant teams, as described in national clinical guidelines.

Assessment of Consciousness

Assessment of consciousness has an important role in the rehabilitation of people with disorders of consciousness following brain injury. According to Schnakers et al, approximately 40% of people certified as being in a persistent vegetative state demonstrated some degree of consciousness, and 10% of patients certified as being in a minimally conscious state had actually emerged from it. The misdiagnosis relates to:
- Lack of knowledge and understanding of the distinctive features of vegetative state and minimally conscious state
- Relying on neurological bedside assessment and underestimating the importance of neurobehavioural outcome measures
- Lack of serial evaluation over time
- Coexisting complex impairments masking certain behaviours, such as vision or hearing impairment
- Pharmacological agents suppressing consciousness, such as sedating medication

Misdiagnosis has wide consequences for long-term recovery, as it might restrict access to neurorehabilitation, limit access to communication strategy development and treatment, and affect decisions about withdrawal of care. The gold standard for consciousness assessment is behavioural assessment tools. Patients on the vegetative state spectrum might benefit from functional neuroimaging testing based on yes/no responses using different brain centre activation patterns.
It must be noted that in both circumstances negative findings have been reported, meaning some patients diagnosed with a minimally conscious state actually demonstrated no consciousness.

Coma Recovery Scale - Revised

The Coma Recovery Scale - Revised is a tool to differentiate between vegetative state and minimally conscious state, developed by Giacino et al to assess those emerging from a minimally conscious state. It is available in many languages, including English, Chinese, and French. It is constructed with subscales similar to the Glasgow Coma Scale, but with much more thorough and detailed itemisation. The score ranges from 0 to 23, allowing greater attention to detail; however, this makes the scale more complicated, with a need for increased assessment time, which may be a disadvantage in the Intensive Care Unit.

Sensory Modality Assessment and Rehabilitation Technique

The Sensory Modality Assessment and Rehabilitation Technique (SMART) originated from the Royal Hospital for Neuro-disability, London. It is a tool recommended by the Royal College of Physicians National Guideline and is used to assess and rehabilitate people with prolonged disorders of consciousness due to severe brain injury. The SMART assessment and treatment can be used by accredited therapists, doctors or nurses who undergo structured training, as well as by relatives. The format of the assessment is systemised and includes ten observational sessions over a 3-week period, followed by an 8-week treatment. Activity is observed in the absence of stimuli for 10 minutes before each assessment session; stimuli across eight modalities are then presented in carefully organised environments, including therapy sessions and leisure activities such as watching TV. The aim is to establish any residual awareness in people in a vegetative state, or to establish the residual communication, sensorimotor responses and functional potential of a patient in a minimally conscious state. Careful observation of meaningful responses establishes the individual's degree of awareness. There are several potential levels of response to stimuli:
- No response at all
- Responses occurring at reflex level (non-purposeful, spontaneous responses over which the patient has no control)
- Withdrawal (turning or pulling away from a stimulus)
- Localising (finding the stimulus and focusing on it)
- Ability to differentiate between two different stimuli

A consistent response at level 5 on 5 consecutive assessments, following any type of stimulus, demonstrates a meaningful response pointing to a minimally conscious state or higher level of function. If a minimally conscious state is certified, the person assessed using SMART can undergo SMART rehabilitation, enhancing communication effectiveness and the reproducibility of responses.

Wessex Head Injury Matrix

Proposed reordering of WHIM items. The figure shows the proposed new order for WHIM items, listing the frequency with which they were observed in patients remaining in VS, MCS− and MCS+ at discharge. MCS, minimally conscious state; MCS−, MCS-Minus; MCS+, MCS-Plus; VS, vegetative state; WHIM, Wessex Head Injury Matrix.

The Wessex Head Injury Matrix (WHIM) is an observational assessment developed by Shiel and colleagues to observe patients emerging from coma through post-traumatic amnesia, monitoring subtle minimally conscious state responses and reflecting performance in everyday life.
An initial pool of 145 behaviours, recorded during longitudinal studies of large cohorts, was systemised into 6 subscales containing 62 items. The subscales are as follows:
- Communication
- Attention
- Social behaviour
- Concentration
- Visual awareness
- Cognition

The items are arranged in a hierarchical order of statistically observed behaviour. The Wessex Head Injury Matrix score represents the rank of the most advanced item observed. The Wessex Head Injury Matrix has demonstrated good to very good reliability and superiority to the Glasgow Coma Scale and the GLS in recording subtle changes between vegetative state and minimally conscious state, with particularly high sensitivity to change in patients in a minimally conscious state. The pattern of recovery proposed by the Wessex Head Injury Matrix lacks precision, and further studies are required to validate the proposed sequence of recovery.

Glasgow Coma Scale

The Glasgow Coma Scale (GCS) is a 15-point scale used to assess a patient's level of consciousness and neurological functioning after brain injury. The scoring is based on best eye-opening response (1-4 points), best motor response (1-6 points) and best verbal response (1-5 points), with the cut-off point for coma at 8 points. For more in-depth information, see the GCS Student’s Guide.

Disorder of Consciousness Scale

The Disorder of Consciousness Scale (DoCS-25) is a structured evaluation tool assessing subtle changes in neurobehavioural functioning during consciousness recovery after traumatic brain injury. Its 25 items describe behaviour in response to auditory, somatosensory, visual and olfactory stimuli. Raw scores from 0 to 50 are assigned to logits and rescaled on a 0 to 100-point scale.

Rehabilitation of Individuals with Disorders of Consciousness

The unique components of rehabilitation of patients with disorders of consciousness are:

1. Assessment of level of consciousness and residual voluntary mobility

2. Providing treatment enhancing the level of consciousness

The interventions should address the reversible causes of impaired consciousness. Eapen and colleagues (2018) suggest areas to be explored before specialist management of disorders of consciousness:
- Undermobilisation / understimulation
- Disrupted sleep-wake cycle
- Pharmacological sedation
- Co-existing medical conditions such as infection or metabolic abnormalities
- Neuroendocrine abnormalities
- Intracranial abnormalities
- Seizures

Another group of interventions comprises treatments directly modulating and enhancing awareness. These include:
- General neurorehabilitation containing multimodal interventions, i.e. sensory stimulation; mobilisation such as handling and FES cycling; postural management with position changes, sitting out of bed, and verticalization through a tilt table or bodyweight support; and interpersonal interaction (especially with relatives). There is strong evidence that verticalization, as well as environmental enrichment, increases brain activation.
- Pharmacological agents. Some promising effects have been demonstrated with neurostimulants engaging catecholaminergic pathways, such as amantadine, levodopa and amphetamine, and with the GABA agonist zolpidem.
- Energy modalities, including deep brain stimulation, transcranial magnetic stimulation, vagus nerve stimulation and low-intensity focused ultrasound.
- Biological therapies, such as stem cell therapy.
All of these interventions aim to activate undamaged but suppressed networks responsible for consciousness, and give clinicians reason to believe that, where the network is intact and optimal stimulation is provided, some patients can move from a vegetative state to a minimally conscious state, or emerge from the minimally conscious state.

3. Addressing requests limiting medical treatment, relatives' education and support, and long-term placement planning

Consideration must be given to neurorehabilitation programmes versus end-of-life palliative care. The quality-of-life issue relates to timely and precise diagnosis of vegetative state or minimally conscious state and the availability of specialist rehabilitation and care facilities. Patients placed in generic placements demonstrate a higher rate of complications and poorer recovery of consciousness. It must be recognised that medical stability is a prerequisite for full access to neurotherapeutic treatment; therefore, placement needs to be timed accordingly to ensure full use of the neurorehabilitation offered.

The families of patients with disorders of consciousness require special attention due to the difficulty of the decisions they need to make immediately after their relative's traumatic brain injury. They may need to make decisions about the termination of treatment or organ donation. Relatives also often struggle with their family member's experience during prolonged disorders of consciousness. The difficult nature of consciousness and the complicated procedures accompanying the care and rehabilitation of people with disorders of consciousness are other stressful factors. For example, a non-medically trained person may find it difficult to understand that a person who can open their eyes, make noises or move reflexively may still be unconscious. Therefore, support and education are crucial for the positive inclusion of relatives and friends in the multidisciplinary team. Families provide invaluable information and extend “the observation time”, often spending time with the person with disorders of consciousness when clinical staff are not present, e.g. in the evenings. They also might recognise subtle changes and provide more powerful stimuli than medical staff, enhancing the behavioural responses of the person with disorders of consciousness. Special consideration must be given to training families who wish their relative to be discharged home.

The rehabilitation of patients with disorders of consciousness also shares general features of the rehabilitation of a patient with severe traumatic brain injury:

4. Bodily functions management: skin integrity, respiratory care, nutrition, bladder and bowel care

5. Managing medical and neurological complications, which are usually consistent with general traumatic brain injury complications but often more severe

6. Managing neuromusculoskeletal problems

Patients with disorders of consciousness often present with weakness, spasticity, contractures, heterotopic ossification, peripheral nerve damage and critical illness polyneuropathy. This area of intervention has an enormous impact on general neurorehabilitation, as motor responses are often assessed during consciousness examinations, which then affects the pathway of care. Musculoskeletal health allows more efficient pain management, positioning and mobilisation, and determines the degree of voluntary movement in the future.

7. Establishing communication when appropriate
8. Providing an optimal level of care: preventing the complications of immobilisation, such as pressure sores, and providing treatment for existing medical problems, such as neuro-infection
9. Pain management: patients in the minimally conscious state are capable of feeling pain, and with multiple potential sources of pain (e.g. muscle tone changes, infections, cannulation), analgesic treatment is appropriate; however, consideration must be given to the sedative nature of these pharmacological agents during consciousness assessment.

Clinical Outcomes in Disorders of Consciousness
Outcome can be measured by mortality, consciousness recovery or functional recovery, and is directly related to diagnosis, with a better prognosis for patients in the minimally conscious state than in the vegetative state. Patients with traumatic brain injury have a better prognosis than those with nontraumatic injury. The duration of the disorder of consciousness is also a prognostic factor: the longer a patient remains in the vegetative state, the lower the chance of recovering consciousness. According to Eapen et al, 52% of patients in the vegetative state at 1 month can recover consciousness, compared with 35% of those still in the vegetative state at 3 months, 16% at 6 months, and nearly none when the vegetative state persists at 12 months. Functional recovery is likewise determined by the duration of the disorder of consciousness, and this group of patients often demonstrates moderate to severe impairment in varying proportions. However, thanks to advances in medical care and therapy, the chance of functional recovery is now much greater than even 20 years ago; this group of patients should therefore receive structured neurorehabilitation programmes, including inpatient and community-based rehabilitation, to improve their chances of recovery. There is no prognostication tool of clear reliability for quantifying the potential of recovery from disorders of consciousness, although some techniques, such as functional imaging and cognitive testing, are becoming more useful.

The Ethical Issues
There are many factors to be considered when looking after patients with disorders of consciousness:
- Diagnostic and prognostic uncertainty in decision-making when treatment withdrawal is considered
- Use of new assessment and treatment methods
- Research participation
- Limitation or withdrawal of medical treatment in a patient with a persistent vegetative state. At this point in time there is an ethical and legal consensus that patients in a chronic vegetative state can have treatment withdrawn, but no such agreement exists for those in a minimally conscious state.

Resources

Agitated Behavior Scale - Physiopedia

Objective
The Agitated Behavior Scale (ABS) was designed to evaluate agitation and other problematic behaviours that commonly occur during the acute recovery phase following traumatic brain injury.

Intended Population
Patients with traumatic brain injury.

Method of Use
The ABS is used for the assessment of agitated behaviour. It is a 14-item scale comprising different types of behaviour, with each item rated from 1 (absent) to 4 (present to an extreme degree). Total scores of 21 points or below are classified as normal behaviour, 22-28 as mild agitation, 29-35 as moderate agitation, and 36-56 as severe agitation (a short scoring sketch follows at the end of this ABS entry).
Score 1: Absent
Score 2: Slight. Does not prevent the patient from conducting other appropriate behaviour.
Score 3: Moderate.
Requires redirection from agitated to appropriate behaviour.
Score 4: Extreme. Agitation persists despite redirection attempts.

Protocol: Behaviours (assign a score of 1 to 4 to each of the following 14)
- Short attention span, easy distractibility, inability to concentrate
- Impulsive, impatient, low tolerance for pain or frustration
- Uncooperative, resistant to care, or demanding
- Violent and/or threatening violence toward people or property
- Explosive or unpredictable anger
- Rocking, rubbing, moaning or other self-stimulating behaviour
- Pulling at tubes or restraints
- Wandering from treatment areas
- Restlessness, pacing, or excessive movement
- Repetitive behaviours (motor or verbal)
- Rapid, loud or excessive talking
- Sudden changes of mood
- Excessive crying or laughing
- Self-abusiveness (physical or verbal)

Reliability
Test-retest reliability (Corrigan, 1989; N=35; mean age 28.2; median education 12 years): excellent correlation of same-day ratings by therapists and nurses (r=.70).
Interrater reliability, TBI (Bogner et al., 1999; N=45; admitted to acute rehabilitation after TBI): excellent correlation for the total score (r=.92), with factor correlations for Disinhibition (r=.90), Aggression (r=.91) and Lability (r=.73) when conducted by research assistants; adequate correlations between research staff and nursing ratings, based on a 10-minute observation by research staff and entire-shift ratings by nursing staff.

Validity
Traumatic brain injury (Corrigan & Bogner, 1994; N=212; mean age 31.2 years; admissions to an inpatient rehabilitation unit with acquired brain injury): confirmatory factor analysis supported the subscale structure of three components, aggression, disinhibition and lability, representing the construct of agitation.

Responsiveness
Agitation as measured by the ABS is best represented as a unitary construct. Results provide additional support for the reliability and validity of the ABS.
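A minimal sketch of the ABS totalling and severity bands described in the Method of Use above, assuming nothing beyond the thresholds quoted there (names are illustrative):

# Minimal sketch: 14 ABS items, each scored 1-4; bands 14-21 normal,
# 22-28 mild, 29-35 moderate, 36-56 severe, per the text above.
def abs_severity(item_scores: list[int]) -> str:
    if len(item_scores) != 14 or any(not 1 <= s <= 4 for s in item_scores):
        raise ValueError("expected 14 item scores, each 1-4")
    total = sum(item_scores)
    if total <= 21:
        return f"{total}: normal behaviour"
    if total <= 28:
        return f"{total}: mild agitation"
    if total <= 35:
        return f"{total}: moderate agitation"
    return f"{total}: severe agitation"

print(abs_severity([2] * 14))  # 28: mild agitation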
Galveston Orientation & Amnesia Test - Physiopedia

Objective
The Galveston Orientation and Amnesia Test (GOAT) is an instrument originally created by Levin, O'Donnell and Grossman, and first published in 1979. It was developed to evaluate cognition serially during the subacute stage of recovery from a closed head injury (CHI). This practical scale measures orientation to person, place and time, and memory for events preceding and following the injury. The GOAT assesses post-traumatic amnesia (PTA) and retrograde amnesia (RA) in patients who have had a severe traumatic brain injury (TBI). It is designed to be a practical, reliable scale that can be used at the bedside or in the emergency room by health service providers of various disciplines, and it is important in determining outcome and prognosis.

Intended Population
The GOAT is primarily used with traumatic brain injury patients with closed head injuries. Modified versions of the GOAT have been designed for use in patient-limiting conditions.

Method of Use

Scoring
The 10 items comprising the GOAT are presented orally to the patient in a fixed order. The test form has space for recording the patient's responses, and the error points, which are deducted for an incorrect response, appear in the error score column. Details on how to calculate error scores from the patient's responses are provided in the notes column. The total GOAT score is obtained by deducting the sum of the error points from 100 (a short sketch after the Interpretation ranges below illustrates this rule).

Variations
There are variations of the GOAT created to address patient-limiting conditions:
- The Written GOAT is administered to patients who can comprehend the GOAT questions but are unable to communicate due to motor speech impairment.
- The Modified GOAT (GOAT-M) was created for patients who can comprehend the GOAT but are unable to communicate due to written impairments.
- The COAT is used for children and adolescents in the early stages of recovery from traumatic brain injury.
- The A-GOAT was developed for use with aphasic patients; it is the GOAT administered in a multiple-choice format, which allows a comparison of aphasic and non-aphasic patients using the same standard test. It includes 10 items with a 3-choice response format.

Interpretation
The duration of post-traumatic amnesia (PTA) is defined as the period following a coma in which the GOAT score is <75. PTA is considered to have ended once a score ≥75 is achieved on three consecutive administrations. A low GOAT score is associated with hospitalization and post-concussion syndrome at early follow-up.
Ranges for the GOAT:
- Score ≥75 for two consecutive days: no longer in the post-traumatic amnesia phase.
- Score between 66 and 75 for two consecutive days: borderline status.
- Score <66 for two consecutive days: patient still in the post-traumatic amnesia phase.
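A minimal sketch of the GOAT arithmetic just described: the total is 100 minus the summed error points, and PTA is considered ended once ≥75 is reached on consecutive administrations. The text quotes both two- and three-administration conventions, so the count is left as a parameter; names are illustrative:

def goat_total(error_points: list[int]) -> int:
    # Total GOAT score = 100 minus the sum of the error points.
    return 100 - sum(error_points)

def pta_ended(serial_totals: list[int], consecutive: int = 3) -> bool:
    # PTA ends once `consecutive` administrations in a row score >= 75.
    run = 0
    for total in serial_totals:
        run = run + 1 if total >= 75 else 0
        if run >= consecutive:
            return True
    return False

print(goat_total([8, 5, 10]))            # 77
print(pta_ended([60, 70, 76, 80, 82]))   # True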
Relevance to Physical Therapy
In his study, Levin reported that the GOAT is an important measurement of the severity of acute CHI and can be used as a predictor of injury prognosis. Similarly, Bode concluded that the duration of PTA after a TBI provides one of the earliest and best predictors of long-term outcome. It has also been reported that patients with retrograde amnesia recovered significantly sooner than patients with anterograde amnesia post-TBI, and that RA assessment alone has significant and novel utility in post-TBI assessment. Multiple studies have shown that health practitioners, including physical therapists, can effectively use the GOAT as a predictor of TBI progression and prognosis. Furthermore, using the GOAT to understand a patient's prognosis post-TBI is essential in determining the patient's suitability for physical therapy and the level of involvement of a physical therapist within the healthcare team: physical therapists would have more involvement with patients with a positive long-term prognosis in the later stages of recovery, while medical clinicians would likely administer medical treatments, in the early stages of recovery, for those with a poor prognosis. The developers of the GOAT determined that it provides clinicians, including physical therapists, with pertinent information for ongoing treatment and discharge planning. Physical therapists may use the GOAT in combination with other cognitive scales as a means of serial cognitive assessment to monitor the clinical course of neuropsychiatric disorders, including traumatic brain injuries.

Reliability
The inter-rater reliability of the total GOAT is r=0.98, with agreement via a Kappa coefficient of k=0.73. The reliability coefficient for individual items on the GOAT has a Kendall correlation coefficient of 0.99. The research for this is not recent and should be updated. The test-retest reliability coefficient for the GOAT is predicted to have a low value.

Validity
Examining construct validity using Rasch analysis, researchers found that the constructed item hierarchy for the GOAT confirmed previous research: orientation to person comes first, orientation to place and time comes next, and memory for events surrounding the injury comes last. GOAT scores correlated positively with Glasgow Coma Scale (GCS) scores (r = 0.456, p < 0.002), as well as with admission and discharge Functional Independence Measure (FIM) scores (r = 0.701 and 0.531, respectively). Regarding predictive validity, PTA as measured by GOAT scores is a significant independent predictor of functional outcome (p = 0.00005) as assessed by the Disability Rating Scale (DRS) and FIM total, motor and cognitive scores. The duration of PTA as assessed by the GOAT was significantly associated with employment status at one year post-injury. Performance on the GOAT was also associated with long-term outcome (at least 6 months post-injury) rated by the Glasgow Outcome Scale (GOS) (p < 0.0001).

Advantages
The GOAT provides an objective rating of early cognitive recovery, eliminating the need to use ambiguous terms such as "confused". In addition, a study by Bode et al. using Rasch analysis supported the use of the GOAT for assessing patients with a wide range of cognitive impairments, given that the test items represent a wide range of difficulties.

Limitations
For GOAT items scored with partial credit, Rasch analysis revealed step disorder. Reorganising the response categories into a simple dichotomy (right versus wrong) was shown to resolve the disorder and allow the construction of an equal-interval measure from the GOAT. With these modifications to item scoring, researchers were able to eliminate unreliable differentiation in responses, developing an equal-interval measure of PTA that displayed good reliability and validity. When using the GOAT it is also important to incorporate an age-correction factor to reduce the possibility of an age effect when comparing younger with older populations.

Coma stimulation - Physiopedia

Introduction
Coma stimulation is also termed sensory stimulation or basal stimulation. Petra Potmesilova et al defined it as "a rehabilitation concept that works on the pedagogical-treatment principle and allows support for the perception, communication, and physical abilities of a person with any disability, regardless of its type and severity." It is a collaborative approach in which specific structured stimuli are applied to comatose patients for a set period, with the aim of improving their level of consciousness and recovery. Coma-stimulation therapies have been administered for decades in rehabilitation settings to patients with decreased levels of consciousness or loss of memory, to prevent sensory deprivation and promote recovery. They may be administered through any sensory modality, with tactile and auditory stimuli being the most common.
The rationale for this class of interventions is that sensory stimulation may enhance neural processing, support neuroplasticity, and thus promote the re-emergence of consciousness. Sensory stimulation programmes differ significantly in duration, type of application and mode of stimulation. The goals of the technique are activation of the brain, improving the patient's responsiveness, improving the transmission of the stimulus, fostering overall recovery, and reducing the duration of recovery. The stimuli can be:
- Tactile
- Auditory
- Visual
- Proprioceptive
- Olfactory and gustatory

Mode of Stimulation
Sensory stimulation can be unimodal or multimodal; research suggests that sensory modalities are more effective when used in concert with each other.
- Unimodal: the application of only one stimulus at a time.
- Multimodal: the application of more than one stimulus at a given point in time.
Multimodal stimulation has proven more effective than unimodal stimulation in improving the level of consciousness.

Theoretical Framework
1. Sensory deprivation theory. Comatose patients experience sensory deprivation, as their ability to respond to stimuli, internal or external, is altered. This alteration leads to an increase in the activation threshold of the reticular activating system. Because coma stimulation is controlled stimulation, it is assumed to meet the raised threshold of these reticular neurons, thereby increasing cortical activity and improving responsiveness.
2. Neural plasticity. Neural plasticity is the ability of the nervous system to change continually by increasing dendritic branching and the number of dendrites. Damage to the nervous system catalyses this increase in synaptogenesis. Applying stimuli during the period of neural regrowth is assumed to maximise the effect of plasticity; it is therefore ideal to start coma stimulation as soon as the patient is medically stable, as close as possible to the time of injury.

Method of Application
Ideal position: 30° propped-up position.
- Visual: administered using a flashlight, brightly coloured objects, a mirror, and pictures of various shapes and sizes; the patient is encouraged to track these objects.
- Auditory: uses taped voice recordings of family and friends, favourite music, or sounds from nature.
- Olfactory: uses perfume, spices, or the aroma of food items.
- Gustatory: spices, popsicles; swabs of appropriate items can be touched to the patient's tongue to stimulate the taste sensation.
- Tactile: administered by rubbing different textures such as satin, silk, fur, smooth metal, sandpaper, or cool or warm items over the patient's body surfaces.
- Proprioceptive: passive range of motion for all joints.

Duration of Stimulation
Varies from 20 minutes to 3 hours per day and can be repeated twice a day.

References
1. Teasdale G, Jennett B. Assessment of coma and impaired consciousness: a practical scale. The Lancet. 1974 Jul 13;304(7872):81-4.
2. Frej M, Frej J. The Glasgow Structured Approach to Assessment of the Glasgow Coma Scale: what is GCS - glasgow coma scale. (accessed 7 May 2017).
3. Middleton PM. Practical use of the Glasgow Coma Scale; a comprehensive narrative review of GCS methodology. Australasian Emergency Nursing Journal. 2012 Aug 1;15(3):170-83.
4. Nell V, Yates DW, Kruger J. An extended Glasgow Coma Scale (GCS-E) with enhanced sensitivity to mild brain injury.
Archives of Physical Medicine and Rehabilitation. 2000 May 1;81(5):614-7.
5. Holmes JF, Palchak MJ, MacFarlane T, Kuppermann N. Performance of the pediatric Glasgow Coma Scale in children with blunt head trauma. Academic Emergency Medicine 2005 Sep 1;12(9):814-9. PMID: 16141014 (accessed 5 May 2017).
6. Acker SN, Ross JT, Partrick DA, Nadlonek NA, Bronsert M, Bensard DD. Glasgow motor scale alone is equivalent to Glasgow Coma Scale at identifying children at risk for serious traumatic brain injury. Journal of Trauma and Acute Care Surgery. 2014 Aug 1;77(2):304-9.
7. Institute of Neurological Sciences NHS Greater Glasgow and Clyde. Glasgow Coma Scale: do it this way [Internet]. Sir Graham Teasdale; 2015 [cited 2017 May 7]. Available from:
8. National Institute for Health and Care Excellence. Head injury: assessment and early management [Internet]. (accessed 7 May 2017).
9. GCS at 40. Glasgow Coma Scale at 40 | The new approach to Glasgow Coma Scale assessment. Available from: [last accessed 05/07/17]
10. Zollman FS, editor. Manual of Traumatic Brain Injury: Assessment and Management. Springer Publishing Company; 2021 Jul 22.
11. Gill M, Reiley D, Green S. Interrater reliability of Glasgow Coma Scale scores in the emergency department. Annals of Emergency Medicine 2004;43(2):215-223. (accessed 6 May 2017).
12. Brott T, Adams H, Olinger C, Marler J, Barsan W, Biller J et al. Measurements of acute cerebral infarction: a clinical examination scale. Stroke 1989;20(7):864-870. PMID: 2749846 (accessed 6 May 2017).
13. Reith F, Synnot A, van den Brande R, Gruen R, Maas A. Factors influencing the reliability of the Glasgow Coma Scale: a systematic review. Neurosurgery 2017;42:3-15. PMID: 28327922 (accessed 7 May 2017).
14. Marion D, Carlier P. Problems with initial Glasgow Coma Scale assessment caused by prehospital treatment of patients with head injuries. The Journal of Trauma: Injury, Infection, and Critical Care 1994;36(1):89-95. PMID: 8295256 (accessed 6 May 2017).
15. Meredith W, Rutledge R, Fakhry S, Emery S, Kromhout-Schiro S. The conundrum of the Glasgow Coma Scale in intubated patients. The Journal of Trauma: Injury, Infection, and Critical Care 1998;44(5):839-845. PMID: 9603086 (accessed 7 May 2017).
16. Lesko M, Jenks T, Perel P, O'Brien S, Childs C, Bouamra O et al. Models of mortality probability in severe traumatic brain injury: results of the modelling by the UK Trauma Registry. Journal of Neurotrauma 2013;30(24):2021-2030. PMID: 23865489 (accessed 6 May 2017).
17. Grote S, Böcker W, Mutschler W, Bouillon B, Lefering R. Diagnostic value of the Glasgow Coma Scale for traumatic brain injury in 18,002 patients with severe multiple injuries. Journal of Neurotrauma 2011;28(4):527-534. PMID: 21265592 (accessed 6 May 2017).
18. McNett M, Amato S, Gianakis A, Grimm D, Philippbar S, Belle J et al. The FOUR Score and GCS as predictors of outcome after traumatic brain injury. Neurocritical Care 2014;21(1):52-57. doi:10.1007/s12028-013-9947-6 (accessed 5 May 2017).
19. McNett M. A review of the predictive ability of Glasgow Coma Scale scores in head-injured patients. Journal of Neuroscience Nursing 2007;39(2):68-75. PMID: 17477220 (accessed 6 May 2017).
20. Jennett B. Assessment of outcome after severe brain damage: a practical scale. The Lancet 1975;305(7905):480-484. (accessed 7 May 2017).
188837
https://mitsoul.org/courses/mit/course-18/18-100/
18.100A/B Real Analysis · students for open and universal learning (soul)

Catalog description: Covers fundamentals of mathematical analysis: convergence of sequences and series, continuity, differentiability, Riemann integral, sequences and series of functions, uniformity, interchange of limit operations. Shows the utility of abstract concepts and teaches understanding and construction of proofs.

18.100A: Proofs and definitions are less abstract than in 18.100B. Gives applications where possible. Concerned primarily with the real line.

18.100B: More demanding than 18.100A, for students with more mathematical maturity. Places more emphasis on point-set topology and n-space.

Also see the math department's subject overview for courses on analysis.

Resources:
- 18.100B Spring 2024 in-class scribe lecture notes
- 18.100B Fall 2022 in-class scribe lecture notes (high-quality)
- 18.100A Fall 2020 OCW (lecture videos, homework, exams)
- 18.100A Fall 2018 class website (homework)
- 18.100B Fall 2012 class website (homework/solutions)
- 18.100B Fall 2011 class website (homework/solutions)
- 18.100B Fall 2010 OCW (homework/solutions from 2006)
- 18.100B Fall 2002 class website (homework/solutions)
188838
https://math.stackexchange.com/questions/3697852/result-about-lima%C3%A7on
polar coordinates - Result about limaçon - Mathematics Stack Exchange

Result about limaçon
Asked 5 years, 4 months ago. Modified 5 years, 4 months ago. Viewed 230 times.

Wikipedia's "Limaçon" entry gives the following information about the polar curve $r = b + a\cos\theta$:

When $b > a$, the limaçon is a simple closed curve. However, the origin satisfies [the Cartesian translation of the equation], so the graph of this equation has an acnode or isolated point. When $b > 2a$, the area bounded by the curve is convex, and when $a < b < 2a$, the curve has an indentation bounded by two inflection points. At $b = 2a$, the point $(-a, 0)$ is a point of 0 curvature. As $b$ is decreased relative to $a$, the indentation becomes more pronounced until, at $b = a$, the curve becomes a cardioid, and the indentation becomes a cusp. For $0 < b < a$, the cusp expands to an inner loop, and the curve crosses itself at the origin. As $b$ approaches $0$, the loop fills up the outer curve and, in the limit, the limaçon becomes a circle traversed twice.

I'm familiar with polar curves and some of their special cases; however, these results about the limaçon are new to me and I don't know how to prove them (notice that I'm not asking about the formula of a limaçon itself).
Tag: polar-coordinates
Edited May 30, 2020 at 9:19; asked May 30, 2020 at 6:41 by user794034. No answers were posted.

Comments:

I think we need to edit your question, because in Wikipedia I can see $r = b + a\cos\theta$... (Anton Vrdoljak, May 30, 2020 at 8:18)

"How can we prove them? Where does that come from?" Just "graph and observe" would seem to cover much of this. (Thinking about the Cartesian graph of $y = a + b\cos x$ also helps.) What exactly do you want proven? ... and what exactly is the "famous result" mentioned in the title? (Blue, May 30, 2020 at 8:24)

@Blue, good point, I've never even thought about that. So should every problem in mathematics that seems true intuitively be accepted? It's like saying "well, this is true because this is what I'm seeing, so no proof is needed". (user794034, May 30, 2020 at 8:27)

@Blue, everything is clear; however, if you think it's not, then feel free to edit it to make it readable. (user794034, May 30, 2020 at 8:30)

@number: Again, it would be helpful if you said exactly what you want proven. If you want proof that $b > a \,(> 0)$ leads the polar graph of $r = b + a\cos\theta$ (as Wikipedia writes it) to be a "simple closed curve": since $\cos\theta$ oscillates between $-1$ and $1$, the value of $b + a\cos\theta$ oscillates between $b - a$ and $b + a$; when $b > a > 0$, this value is strictly positive, so the polar graph is never in danger of self-intersecting (so it's "simple"), and the periodic nature of $\cos\theta$ guarantees that the graph is a loop (hence "closed"). Is that what you want? (Blue, May 30, 2020 at 8:39)
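One way to sanity-check the quoted claims numerically (a sketch, not a proof): the signed curvature of a polar curve $r(\theta)$ is $\kappa = (r^2 + 2r'^2 - r\,r'')/(r^2 + r'^2)^{3/2}$, which is easy to evaluate for $r = b + a\cos\theta$. At $\theta = \pi$ with $b = 2a$ the numerator vanishes, matching the "point of 0 curvature" statement:

import math

def limacon_curvature(a: float, b: float, t: float) -> float:
    # Signed curvature of r = b + a*cos(t) via the polar curvature formula.
    r = b + a * math.cos(t)
    r1 = -a * math.sin(t)   # dr/dt
    r2 = -a * math.cos(t)   # d^2 r / dt^2
    return (r * r + 2 * r1 * r1 - r * r2) / (r * r + r1 * r1) ** 1.5

a = 1.0
print(limacon_curvature(a, 2.0 * a, math.pi))   # 0.0: b = 2a, zero curvature at (-a, 0)
print(limacon_curvature(a, 3.0 * a, math.pi))   # positive: b > 2a, convex
print(limacon_curvature(a, 1.5 * a, math.pi))   # negative: a < b < 2a, indentation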
188839
https://www.quora.com/How-can-you-find-all-integer-solutions-for-x-y-z-x-3-y-3-z-3-3
How to find all integer solutions for x+y+z = x^3+y^3+z^3 = 3 - Quora

How can you find all integer solutions for $x+y+z = x^3+y^3+z^3 = 3$?

Rik Bos, Ph.D. Mathematics from Utrecht University:

Note that $x+y = 3-z$ divides $x^3+y^3 = 3-z^3$. Therefore $z-3$ divides $z^3-3$. Since $z-3$ also divides $z^3-3^3$, we see $f := z-3 \mid 3^3-3 = 24$. By symmetry, the same holds for $d := x-3$ and $e := y-3$. On the other hand, we have $x^3 = (d+3)^3 \equiv d^3 \pmod 9$, and since for every integer $n$ we have $n^3 \equiv 0, \pm 1 \pmod 9$, we conclude from $x^3+y^3+z^3 = 3$ that $d^3 \equiv 1 \pmod 9$, and the same holds for $e^3$ and $f^3$. Then $d \equiv 1, 4, 7 \pmod 9$. Since $d \mid 24$, we can write $d = 9k+1$, $9k+4$ or $9k+7$ where $-3 \le k \le 2$, and it is straightforward to verify that only $d = -8, -2, 1, 4$ qualify as possible solutions. Moreover, $d+e+f = -6$, so assuming $d \le e \le f$ we see $d \le -2$, and it's just a matter of verifying that we have the candidates $(d,e,f) = (-2,-2,-2), (-8,1,1), (-8,-2,4)$. Then $(x,y,z) = (1,1,1), (-5,4,4), (-5,1,7)$. However, this last one fails to satisfy $x^3+y^3+z^3 = 3$. So apart from the order, the solutions are $(1,1,1)$ and $(-5,4,4)$.
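Rik Bos's divisor-and-congruence argument is easy to verify by brute force; a small Python sketch (checking $d \mid 24$ and $d^3 \equiv 1 \pmod 9$, then assembling triples with $d+e+f = -6$):

# Candidates d = x - 3 must divide 24 and satisfy d^3 = 1 (mod 9).
divisors_24 = [d for d in range(-24, 25)
               if d != 0 and 24 % d == 0 and pow(d, 3, 9) == 1]
print(sorted(divisors_24))  # [-8, -2, 1, 4]

sols = set()
for d in divisors_24:
    for e in divisors_24:
        f = -6 - d - e
        if f in divisors_24:
            x, y, z = d + 3, e + 3, f + 3
            if x**3 + y**3 + z**3 == 3:   # filters out the spurious (-5, 1, 7)
                sols.add(tuple(sorted((x, y, z))))
print(sols)  # {(1, 1, 1), (-5, 4, 4)} up to ordering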
Sohel Zibara:

$x+y+z = 3$
$(x+y+z)^3 = 27$
$x^3+y^3+z^3 + 3(x+y)(y+z)(z+x) = 27$
$3 + 3(x+y)(y+z)(z+x) = 27$
$(x+y)(y+z)(z+x) = 8$
Thus either $x+y = y+z = z+x = 2$, which together with $x+y+z = 3$ yields $x = y = z = 1$, or $x+y = 8$, $y+z = -1$ and $z+x = -1$, which together with $x+y+z = 3$ yields $x = y = 4$ and $z = -5$.

Mike Hirschhorn, Honorary Associate Professor of Mathematics at UNSW:

Put $x = 1+a$, $y = 1+b$, $z = 1+c$, so that $a+b+c = 0$ and
$3a^2+a^3+3b^2+b^3+3c^2+c^3 = 0.$
Substituting $c = -(a+b)$,
$3a^2+a^3+3b^2+b^3+3(a+b)^2-(a+b)^3 = 0,$
$6a^2+6ab+6b^2-3ab(a+b) = 0,$
$2(a^2+ab+b^2) = ab(a+b),$
$(2-a)b^2+(2a-a^2)b+2a^2 = 0,$
$\Delta = a^2(2-a)^2-8a^2(2-a) = a^4+4a^3-12a^2 = a^2(a^2+4a-12).$
We require $a^2+4a-12 = \square$, i.e. $(a+2)^2-16 = \square$, giving $a = 2$ (no!), $3$, $-6$ or $-7$ (no!). Hence
$(a,b,c) = (3,3,-6), (3,-6,3)$ or $(-6,3,3)$, and $(x,y,z) = (4,4,-5)$ etc.

Philip Lloyd, Specialist Calculus Teacher:

Here are the 4 possible points on the very unus...
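Sohel Zibara's answer leans on the identity $(x+y+z)^3 = x^3+y^3+z^3+3(x+y)(y+z)(z+x)$, which can be confirmed in one line with sympy (assumed available):

from sympy import symbols, expand

x, y, z = symbols("x y z")
lhs = (x + y + z) ** 3
rhs = x**3 + y**3 + z**3 + 3 * (x + y) * (y + z) * (z + x)
print(expand(lhs - rhs) == 0)  # True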
print("Find all integer solutions for x+y+z = x³+y³+z³ = 3") print() for x in range(-999,1000): for y in range(-999,1000): z=3-x-y if x3 + y3 + z3 == 3: print(x,y,z) When we run this program, we get four answers, the trivial 1+1+1=1³+1³+1³=3 solution, and three solutions with two fours and a negative five in all possible arrangements. If you want to actually run this program, click here: . . Do you appreciate the work I put into my many answers? If you decide to get a 30-day free trial of Quora+, please usemy link(I might make a commission). If you do, Thank You for rewarding me for my efforts! BONUS: You get easier access to all of my answers! (But don’t worry, if you don’t, I’ll still post free copies of my answers as comments.) . Deans and univ. math professors who admire my work are invited to click here. to read my Question for University Mathematics Departments . Quora . com… Thanks. Upvote · Aman Raj Well acquainted with numbers... · Author has 308 answers and 1.1M answer views ·3y Related What are all the integer solutions (x,y)(x,y) to the equation x 3+y 3=(x+y)2 x 3+y 3=(x+y)2? Good question. Let us use the identity a 3+b 3=(a+b)(a 2+b 2−a b)a 3+b 3=(a+b)(a 2+b 2−a b) Thus, we can transform the equation as ⟹(x+y)(x 2+y 2−x y)=(x+y)2...(1)⟹(x+y)(x 2+y 2−x y)=(x+y)2...(1) Equation (1) forces that either x+y=0 x+y=0 or x 2+y 2−x y=x+y...(2)x 2+y 2−x y=x+y...(2) Let us consider first one. This shows that (x,−x)(x,−x) is a solution of the equation for all integers x x. We now consider the second equation. It can be written as ⟹x 2−(y+1)x+y 2−y=0...(3)⟹x 2−(y+1)x+y 2−y=0...(3) Equation (3) can be viewed as a quadratic in x x. Thus, for integer x,x, the above quadratic must have integer roots. This means that the discriminant must be a perfect square. Thus, ⟹(y+1)2−4(y 2−y)⟹(y+1)2−4(y 2−y) is Continue Reading Good question. Let us use the identity a 3+b 3=(a+b)(a 2+b 2−a b)a 3+b 3=(a+b)(a 2+b 2−a b) Thus, we can transform the equation as ⟹(x+y)(x 2+y 2−x y)=(x+y)2...(1)⟹(x+y)(x 2+y 2−x y)=(x+y)2...(1) Equation (1) forces that either x+y=0 x+y=0 or x 2+y 2−x y=x+y...(2)x 2+y 2−x y=x+y...(2) Let us consider first one. This shows that (x,−x)(x,−x) is a solution of the equation for all integers x x. We now consider the second equation. It can be written as ⟹x 2−(y+1)x+y 2−y=0...(3)⟹x 2−(y+1)x+y 2−y=0...(3) Equation (3) can be viewed as a quadratic in x x. Thus, for integer x,x, the above quadratic must have integer roots. This means that the discriminant must be a perfect square. Thus, ⟹(y+1)2−4(y 2−y)⟹(y+1)2−4(y 2−y) is a perfect square. ⟹1+6 y−3 y 2⟹1+6 y−3 y 2 is a perfect square. However, this forces that 1+6 y−3 y 2⩾0 1+6 y−3 y 2⩾0 Solving of the above inequality shows that for integer y y, the only permissible values are y=0,1,2 y=0,1,2 only. (A) If y=0,y=0, then equation (3) gives ⟹x 2−x=0⟹x=0,1⟹x 2−x=0⟹x=0,1 Thus we get two pairs namely (x,y)=(0,0),(1,0)(x,y)=(0,0),(1,0) (B) If y=1,y=1,then equation (3) gives ⟹x 2−2 x=0⟹x=0,2⟹x 2−2 x=0⟹x=0,2 Thus we get two pairs namely (x,y)=(0,1),(2,1)(x,y)=(0,1),(2,1) (C) If y=2,y=2, then equation (3) gives ⟹x 2−3 x+2=0⟹x=1,2⟹x 2−3 x+2=0⟹x=1,2 Thus we get two pairs namely (x,y)=(1,2),(2,2)(x,y)=(1,2),(2,2) Thus combining all the information gathered above, the only possible solutions of the equation are: (x,y)=(0,0),(1,0),(0,1),(2,1),(1,2),(2,2),(x,−x)(x,y)=(0,0),(1,0),(0,1),(2,1),(1,2),(2,2),(x,−x) where x x is an integer. Thanks for reading! Your response is private Was this worth your time? This helps us sort answers on the page. 
Alon Amit, PhD in Mathematics (answering the same related question):

Since $x^3+y^3 = (x+y)(x^2-xy+y^2)$, the equation has an infinite family of obvious solutions $y = -x$. Excluding those, we can divide both sides by $x+y$ and obtain the quadratic equation
$x^2-xy+y^2-x-y = 0$
Geometrically, this is an ellipse. Therefore, enumerating the solutions in integers is simply a matter of checking which lattice points lie on this ellipse. In other words, we easily prove that no integer solutions exist when $x \ge 3$, $y \ge 3$, $x \le -1$ or $y \le -1$, simply because no real solutions exist beyond those limits. The integer points are clearly $(0,0), (1,0), (0,1), (1,2), (2,1)$ and $(2,2)$, so those six solutions (along with the infinite family $y = -x$) are all the integer solutions of the original equation.

David Ash, Stanford PhD in computer science (answering the related question: how can you find all integer solutions to the system $x^3+y^3+z^3 = 1000$ and $x+y+z = 9190$ with $x \ge y \ge z$?):

Let $S = x+y$. Then $x^3+y^3 = S^3-3xyS$. Also $z^3 = (9190-S)^3$. So
$S^3-3xyS+(9190-S)^3 = 1000.$
This is a quadratic in $S$ (the cubic terms cancel), so by the Rational Root Theorem $S$ must divide the constant term, $9190^3-1000 = 776151558000$. Now $776151558000 = 2^4 \cdot 3^4 \cdot 5^3 \cdot 7 \cdot 13 \cdot 17 \cdot 19 \cdot 163$, which has exactly $5 \cdot 5 \cdot 4 \cdot 2^5 = 3200$ positive divisors. We do not need to worry about negative divisors, since if $x+y$ were negative, the condition $x \ge y \ge z$ would imply $z$ negative and hence $x+y+z$ negative, but we are told that $x+y+z = 9190$. Given this manageable number of divisors, we simply need to try each one. Given a value for $x+y$, we can use $S^3-3xyS+(9190-S)^3 = 1000$ to solve for $xy$; and given the values of $x+y$ and $xy$, we can easily solve for $x$ and $y$ individually to see if they turn out to be integers. This allows us to find all integer solutions in reasonable time. After implementing the Python code, the possible solutions for $(x,y,z)$ are $(43315, 9334, -43459)$, $(19830, 10005, -20645)$, $(17380, 10330, -18520)$, $(13754, 11440, -16004)$, $(18970, 10100, -19880)$ and $(14895, 10945, -16650)$.
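David Ash mentions Python code without showing it; a sketch of the divisor search he describes (my reconstruction under the same Rational-Root-Theorem constraint, not his actual program):

from math import isqrt

N = 9190 ** 3 - 1000            # 776151558000
target_sum, target_cubes = 9190, 1000

# Enumerate all positive divisors S of N.
divs = set()
for d in range(1, isqrt(N) + 1):
    if N % d == 0:
        divs.update((d, N // d))

sols = set()
for S in divs:
    num = S ** 3 + (target_sum - S) ** 3 - target_cubes
    if num % (3 * S):
        continue
    xy = num // (3 * S)          # from S^3 - 3*xy*S + (9190-S)^3 = 1000
    disc = S * S - 4 * xy        # x, y are roots of t^2 - S t + xy = 0
    if disc < 0 or isqrt(disc) ** 2 != disc:
        continue
    r = isqrt(disc)
    if (S + r) % 2:
        continue
    x, y = (S + r) // 2, (S - r) // 2
    z = target_sum - S
    if x >= y >= z:
        sols.add((x, y, z))
print(sorted(sols))  # should recover the six triples listed above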
Abdelhadi Nakhal, hydrogeologist engineer (answering the related question: how do I find all $(x,y,z) \in \mathbb{Z}$ such that $x^3+y^3 = 3^z$?):

We have $(x+y)(x^2-xy+y^2) = 3^z$. We will begin by studying the particular cases.

Case $z = 0$: $x+y = 1$ and $x^2-xy+y^2 = (x+y)^2-3xy = 1$, giving $(x,y,z) = (1,0,0), (0,1,0)$.

Case $z = 2$: the factor $3^2 = 9$ splits in three ways.
- $x+y = 1$, $x^2-xy+y^2 = 9$: then $(x+y)^2-3xy = 9$ implies $-3xy = 8$, impossible.
- $x+y = 3$, $x^2-xy+y^2 = 3$: then $9-3xy = 3$, so $xy = 2$, hence $(x,y,z) = (1,2,2), (2,1,2)$.
- $x+y = 9$, $x^2-xy+y^2 = 1$: then $(x+y)^2-3xy = 1$ implies $3(27-xy) = 1$, impossible.

Case $z = 3$: according to Fermat's Last Theorem, this case yields no solution.

Case $z \ge 4$: it is easy to prove that $x+y = 1$ is impossible, because we would get $1-3xy = 3^z$; we can also prove that $x^2-xy+y^2 = 1$ is impossible. So pose
$x+y = 3^a, \quad x^2-xy+y^2 = 3^b$
for some positive integers $a, b$. From these we deduce
$x+y = 3^a, \quad xy = 3^{2a-1}-3^{b-1}.$
Hence $x, y$ are the roots of the quadratic
$s^2-3^a s+(3^{2a-1}-3^{b-1}) = 0.$
The discriminant must be a perfect square and non-negative:
$\delta^2 = 3^{2a}+4(3^{b-1}-3^{2a-1}) = 3^b+3^{b-1}-3^{2a-1} \ge 0,$
which is possible only when $b \ge 2a-1$.
- Case $b = 2a-1$: then $\delta^2 = 3^{b-1} = 3^{2a-2}$, so $\delta = 3^{a-1}$, which implies
$x = \frac{3^a-3^{a-1}}{2}, \quad y = \frac{3^a+3^{a-1}}{2}.$
- Case $b > 2a-1$: then $\delta^2 = 3^{2a-1}(3^{b-2a+1}+3^{b-2a}-1)$ with $\gcd(3^{2a-1},\, 3^{b-2a+1}+3^{b-2a}-1) = 1$, so both factors must be perfect squares. Contradiction, because $3^{2a-1}$ is not a perfect square ($2a-1$ is odd).
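A quick empirical cross-check of the $x^3+y^3 = 3^z$ discussion, searching small non-negative $x \le y$; only the trivial $(0, 3^a)$ solutions and the $3^k \cdot (1, 2)$ family turn up, consistent with the case analysis above. A sketch:

hits = []
for x in range(0, 800):
    for y in range(x, 800):
        s = x**3 + y**3
        if s == 0:
            continue
        z = 0
        while 3 ** z < s:       # smallest power of 3 that is >= s
            z += 1
        if 3 ** z == s:
            hits.append((x, y, z))
print(hits)
# e.g. (0, 1, 0), (0, 3, 3), (0, 9, 6), ... and (1, 2, 2), (3, 6, 5),
# (9, 18, 8), (27, 54, 11), ... i.e. (3^k, 2*3^k, 3k+2)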
Bernard Montaron, PhD in Mathematics, Université Pierre et Marie Curie Paris VI (answering the related question: can you find 12 integer solutions of $x^3+y^3+z^3 = 2025$ with $|x| \ge |y| \ge |z|$?):

Non-negative integer solutions of the equation $x^3+y^3+z^3 = n > 0$ with $x \ge y \ge z$ are finite in number and easy to find, if they exist, because $0 \le z \le \sqrt[3]{n/3}$ and, given $z$ in that interval, $z \le y \le \sqrt[3]{(n-z^3)/2}$. There are no such solutions for $n = 2025$. Using congruences modulo 9, it's easy to prove that this equation has no integer solution if $n = \pm 4+9k$. 2025 is not of that form, so we expect solutions with at least one negative integer.

Ignoring the condition $|x| \ge |y| \ge |z|$ (which is there to avoid repeating permuted solutions), let's fix the value of $z \in \mathbb{Z}$. We'll find the best algorithm to solve $x^3+y^3 = n-z^3$. Among the three sums $x+y, x+z, y+z$ at least one is positive; say it's $x+y = s > 0$, and $z^3 \le n$. The equation becomes $s(s^2-3xy) = n-z^3$, with $s \mid n-z^3$ and
$s^2-3xy = \frac{n-z^3}{s}, \qquad s^2-4xy = (x-y)^2.$
We eliminate the product $xy$ by combining the two equations to obtain
$(x-y)^2 = \frac{4(n-z^3)-s^3}{3s} \implies 0 < s \le \sqrt[3]{4(n-z^3)}.$
As we can see, the choice of $s$ is tightly constrained. It's a positive divisor of $n-z^3$ which must be smaller than $\sqrt[3]{4(n-z^3)}$ and must yield a perfect square, to obtain
$x-y = \sqrt{\frac{4(n-z^3)-s^3}{3s}}, \qquad x+y = s.$
This gives the solution(s) $x, y$. Next, we can use another value of $z$. Since $x+y$ and $x^3+y^3$ have the same sign (positive), it is sufficient to start from $z_0 = \lfloor n^{1/3} \rfloor$ and to loop, decreasing $z$ by 1 at each step. Solutions with increasing absolute values are obtained that way.

Here are 10 solutions obtained in a few minutes on a laptop PC with a simple Excel VB macro:
$(x,y,z) =$ (16, -12, -7), (35, -33, -17), (-82, 81, 28), (274, -231, -202), (492, -455, -292), (5496, -5071, -3290), (-14468, 14457, 1904), (17868, -16007, -11704), (-181597, 181293, 31081), (-756168, 755617, 98114).

To these one can add the only known solution for $n = 75$, multiplying its three numbers by 3 to obtain the 11th solution (-1305609693, 1305609249, 13143477). It now remains to find a 12th solution between (-756168, 755617, 98114) and (-1305609693, 1305609249, 13143477). Anyone interested?
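Two quick checks on this answer: cubes mod 9 only take the values {0, 1, 8}, so three of them can never sum to ±4 (mod 9); and the listed triples really do satisfy $x^3+y^3+z^3 = 2025$. A sketch:

cubes_mod9 = {pow(t, 3, 9) for t in range(9)}
print(cubes_mod9)  # {0, 1, 8}
sums = {(a + b + c) % 9 for a in cubes_mod9
        for b in cubes_mod9 for c in cubes_mod9}
print(4 in sums, 5 in sums)  # False False  (5 is -4 mod 9)

triples = [(16, -12, -7), (35, -33, -17), (-82, 81, 28),
           (274, -231, -202), (492, -455, -292)]
print(all(x**3 + y**3 + z**3 == 2025 for x, y, z in triples))  # True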
(ii) x = 3m + 1 and y = 3n − 1 or vice versa, m, n ≥ 1:
(3m + 1)³ + (3n − 1)³ = 3^z ⟹ 9(m + n)(3m² − 3mn + 3m + 3n² − 3n + 1) = 3^z.
Because one of [x, y] is odd and the other even, one of [m, n] is odd and the other even.
3m² + 3m + 3n² − 3n − 3mn + 1 = 3m(m + 1) + 3n(n − 1) − 3mn + 1
is an odd number ≠ 1 that is coprime to 3 and hence to 3^z, which means that there are no other solutions.
Bernard Montaron, PhD in Mathematics & Discrete Mathematics, Université Pierre Et Marie Curie Paris VI (Graduated 1980) · Upvoted by Nathan Hannon, Ph.D. Mathematics, University of California, Davis (2021) and Alon Amit, Lover of math. Also, Ph.D. · Author has 3.2K answers and 2.1M answer views · 10mo
Related: Does x³ + y³ + z³ = t³ have non-trivial integer solutions?
Yes, it has an infinity of non-trivial integer solutions. We can also prove that it has an infinity of non-trivial solutions with x, y, z, t having no common factor. The well-known identity
(9t⁴)³ + (3t − 9t⁴)³ + (1 − 9t³)³ = 1³
proves it! There is also an infinity of non-trivial solutions in positive integers, such as the nice examples 3³ + 4³ + 5³ = 6³ and 1³ + 6³ + 8³ = 9³. How do you prove this? Well, you can - for example - use the identity
(9p⁴)³ + (3pq³ − 9p⁴)³ + (q⁴ − 9qp³)³ = (q⁴)³
with q > 3^(2/3)·p, (p, q) = 1 (coprime) and q ≠ 0 mod 3. Or the identity
(p⁴)³ + (9pq³ − p⁴)³ + (9q⁴ − 3qp³)³ = (9q⁴)³
with q > 3^(−1/3)·p, (p, q) = 1 (coprime) and p ≠ 0 mod 3.
Rik Bos, Ph.D. Mathematics from Utrecht University (Graduated 1979) · Upvoted by Horst H. von Brand, PhD Computer Science & Mathematics, Louisiana State University (1987) and Bernard Montaron, PhD Mathematics & Discrete Mathematics, Université Pierre Et Marie Curie Paris VI (1980) · Author has 1.4K answers and 1.3M answer views · 1y
Related: Can you find all integer solutions of x⁴ + 3y⁴ = z²?
Given any integral solution (x, y, z) of the equation
x⁴ + 3y⁴ = z²   (1)
a prime divisor p of both x and y will give a "reduced" solution (x/p, y/p, z/p²), and conversely any solution (x, y, z) will give a new solution (kx, ky, k²z), where k is any integer. So the real problem is to find primitive solutions, i.e. solutions where gcd(x, y) = 1.
Moreover, given a solution (x, y, z), we may change the sign of any of the coordinates and still keep a solution, so we may as well assume x, y, z ≥ 0. In fact, we can assume x, y, z > 0, since the only primitive solutions with a zero coordinate are (0, 0, 0) and (1, 0, 1). Here are a few solutions. (x, y, z) = (1, 1, 2) and (1, 2, 7) are easy to find, but less easy are (x, y, z) = (11, 3, 122), (47, 28, 2593) and (13, 475, 390794). To find all solutions (positive and primitive), we will make use of elliptic curves, though at the end we will mention a more elementary approach that is also more restricted.
The elliptic curve approach
We can convert our quartic equation to an elliptic curve as follows. Multiplying both sides of the equation by x²/y⁶ we obtain x⁶/y⁶ + 3x²/y² = x²z²/y⁶. Substituting X = x²/y² and Y = xz/y³ we get the elliptic curve E defined by
Y² = X³ + 3X   (2)
It turns out that the substitutions above set up a correspondence between rational points of E whose first coordinate is a rational square and rational solutions of the equation x⁴ + 3y⁴ = z², up to a certain equivalence ((x, y, z) is equivalent to (tx, ty, t²z) for all t ∈ ℚ*; in particular the equivalence class contains integral solutions and therefore also a primitive solution). For details, see [1, p. 392–393].
Examples of solutions to (1): we start with the generator P = (1, 2) for the elliptic curve E. Note that P corresponds to the equivalence class of (1, 1, 2). One can compute multiples of P, for instance 2P = (1/4, −7/8), and by the correspondence mentioned above conclude that x²/y² = 1/4, so that a primitive solution to (1) has coordinates x = 1, y = 2 and z = 7 (remember we restrict ourselves to nonnegative solutions). Similarly, 3P = (121/9, −1342/27), so since 121/9 = (11/3)² we find the primitive solution (x, y, z) = (11, 3, 122) already mentioned above. We can go on and find that the first coordinate of 4P is 2209/784 = (47/28)², corresponding to the solution (47, 28, 2593) above, while 5P gives (13, 475, 390794) and 6P gives (7199, 4026, 58941127).
Explicit formulas
Instead of giving numerical values, we can try to give formulas for the multiples of P. Using the correspondence above, it then should be possible to generate all primitive solutions of (1). The idea is to express the coordinates of (n+1)P in terms of the coordinates of nP.
Before we go into details, here are the formulas to go from a primitive solution (x, y, ±z) of (1) to a "larger" solution (we will explain below how to deal with the ± sign in front of z):
x′ = |2xy − z|, y′ = |x² − y²| and z′ = 2|xy(x² + 3y²) − (x² + y²)z|
It's very well possible that gcd(x′, y′) > 1, and in this case we have to factor out the gcd to obtain a primitive solution. OK, let's start, so let Q = (u, v) ≠ P be a multiple of P. Then we can determine P + Q = (r, s) in terms of u and v. Keep in mind that (r, −s) (note the minus sign!) is the intersection of the elliptic curve E with the line L through P and Q. Since L can be described by the equation Y − 2 = λ(X − 1) where λ = (v − 2)/(u − 1), we find, after substituting this in E, that
X³ − λ²X² + (3 − 2λ(2 − λ))X − (2 − λ)² = 0   (3)
As P, Q and −(P + Q) are intersection points of E and L, we know that 1, u and r are solutions to (3), so the polynomial can be factored as (X − 1)(X − u)(X − r) and therefore ur = (2 − λ)². From this it is immediate that if u is a rational square, the same holds for r. In fact, 2 − λ = (2u − v)/(u − 1), and if we use the correspondence u = x²/y², v = xz/y³, then r turns out to be
r = (2xy − z)²/(x² − y²)²   (4)
Note that √r = |2xy − z|/|x² − y²|, so to find a new solution to (1) we can use the correspondence above and put x′ = |2xy − z| and y′ = |x² − y²|, though we should be careful concerning the sign of z. For instance, based on the solution (x, y, z) = (1, 2, 7) we get x′ = 3, y′ = 3. To get a primitive solution we should factor out the gcd, so that would give us the old solution (1, 1, 2) corresponding to the point P on E. But if instead we use (x, y, z) = (1, 2, −7) we do get a new solution with x′ = 11, y′ = 3 (this change in sign from 7 to −7 actually corresponds to replacing the point −2P by 2P). We still need to express s, the second coordinate of P + Q, in terms of x, y, z, because then we could deduce z′ from s = x′z′/y′³. Since (r, −s) belongs to L we have −s = λr + 2 − λ, and this turns out to be
−s = 2(v − 2u)(u² − uv + 3u − v)/(u(u − 1)³)
Again using the correspondence u = x²/y², v = xz/y³, we find
s = 2(2xy − z)(xy(x² + 3y²) − (x² + y²)z)/(x² − y²)³   (5)
Then z′ = 2|xy(x² + 3y²) − (x² + y²)z|.
[1] COHEN, Henri; Number Theory, Volume I: Tools and Diophantine Equations. Springer New York, 2007.
A more elementary (and more restricted) approach
Of course, this reduction to an elliptic curve is a beautiful result, but my first attempt was more elementary. I succeeded in deriving explicit formulas in case y is even. More precisely, if (x, y, z) is a primitive solution with y even and positive, then x, y, z can be expressed in terms of a smaller solution. Suppose we have a primitive solution (R, S, M), i.e. M² = R⁴ + 3S⁴ and gcd(R, S) = 1.
1. If R and S have different parity, then the triple (x, y, z) with x = |R⁴ − 3S⁴|, y = 2MRS and z = M⁴ + 12R⁴S⁴ is also a primitive solution.
2. If R and S are both odd, then the triple (x, y, z) with x = |R⁴ − 3S⁴|/2, y = MRS and z = M⁴/4 + 3R⁴S⁴ is a primitive solution.
3. Moreover, if (x, y, z) is a primitive solution with y even and positive, then there is a primitive solution (R, S, M) such that by applying 1 or 2 above we obtain (x, y, z).
If we translate the above in terms of the elliptic curve, it turns out that going from (R, S, M) to (x, y, z) corresponds to doubling the point on the elliptic curve belonging to (R, S, M). Unfortunately, if y is odd, there doesn't seem to be such a nice reduction to a smaller solution of equation (1). Details of the proof are below the line.
We start with a primitive solution (x, y, z) and let X = x², Y = y². Then
X² + 3Y² = z²   (6)
Since gcd(X, Y) = 1, we have to find - as an intermediate step - all primitive nonnegative solutions to (6) and later require that X, Y are perfect squares. Fortunately, the primitive solutions to (6) can be completely described by two mutually exclusive parameterized formulas given below.
If Y is even, then X = |m² − 3n²|, Y = 2mn and z = m² + 3n², where m, n are coprime nonnegative integers of different parity and m is not divisible by 3.
If Y is odd, then X = |m² − 3n²|/2, Y = mn and z = (m² + 3n²)/2, where m and n are coprime odd nonnegative integers and m is not divisible by 3.
The way to find these solutions is to solve the equation
u² + 3v² = 1   (7)
over the rationals (note that if we divide both sides of (6) by z² we get equation (7) with u = X/z, v = Y/z). Elsewhere I have shown how to solve the equation u² + kv² = 1 over the rationals (where k is any positive integer), so here I will only sketch how to solve (7). The idea is to choose one particular rational solution, for instance (−1, 0), and draw lines through this point with a rational slope. Such a line will have one other intersection point with the geometric figure defined by (7) (which is an ellipse). If the slope of the line is m/n where m and n are coprime integers, then one finds after some calculations that
u = (m² − 3n²)/(m² + 3n²), v = 2mn/(m² + 3n²)
From this it is not hard to obtain the two parameterizations above. Now let's go back to a primitive solution (x, y, z) of (1). So suppose x⁴ + 3y⁴ = z² where x and y are coprime, y is even and y > 0. Then by the first parameterization above we have x² = |m² − 3n²|, y² = 2mn and z = m² + 3n² for suitable m, n ≥ 1. Since 2mn is a perfect square, we know that either m or n is even. Also note we cannot have x² = −m² + 3n² by a mod 4 argument. Therefore x² = m² − 3n², so x² + 3n² = m² is the same type of equation as (6). This turns out to be crucial.
Let's now first assume that n is even. From y² = 2mn, it follows that m and 2n are perfect squares. Also x² = m² − 3n² implies that x² + 3n² = m², and we know this is a primitive solution to (6), so the first parameterization above applies: there are suitable r, s with x = |r² − 3s²|, n = 2rs, m = r² + 3s². But recall that 2n and m are perfect squares, from which we deduce that r and s are also perfect squares.
Writing m = M², r = R², s = S² where M, R, S ≥ 0, the equation m = r² + 3s² becomes M² = R⁴ + 3S⁴, which is the same type of equation as z² = x⁴ + 3y⁴. Since y² = 2mn and n = 2rs, so 2n = 4R²S², while m = M², we find y = 2MRS. Also x = |r² − 3s²| = |R⁴ − 3S⁴|. Finally, since n = 2R²S², we have
z = m² + 3n² = M⁴ + 12R⁴S⁴
As an example of a solution to M² = R⁴ + 3S⁴ where r = R² and s = S² are coprime and of different parity, we have (R, S, M) = (1, 2, 7) and therefore (x, y, z) = (47, 28, 2593). One can check this is indeed a primitive solution.
Next suppose n is odd (while we still assume y > 0 and y is even). Then from y² = 2mn and n odd, it follows that 2m and n are perfect squares. And from x² + 3n² = m² and n odd, it follows that the second parameterization above applies, so focusing on m and n we find there are odd coprime integers r, s with n = rs and m = (r² + 3s²)/2. Since both 2m = r² + 3s² and n = rs are perfect squares, and since r and s are coprime, we can write r = R², s = S². Also, 2m = M² for some positive integer M, and so R⁴ + 3S⁴ = M². As above, we find y² = 2mn = M²R²S², so y = MRS. Moreover, x = |r² − 3s²|/2 = |R⁴ − 3S⁴|/2 and z = m² + 3n² = M⁴/4 + 3R⁴S⁴. As an example of a solution to M² = R⁴ + 3S⁴ where r = R² and s = S² are coprime and odd, we have (R, S, M) = (1, 1, 2) and so (x, y, z) = (1, 2, 7). Similarly, (11, 3, 122) gives (7199, 4026, 58941127). So this solves the original equation in case y is even.
[For completeness, here are a few equations that can be derived in case y is odd. Using a similar approach as above, there doesn't seem to be a way to construct a smaller solution from which (x, y, z) can be constructed. Of course, using the elliptic curve approach, we know there actually is such a smaller solution, but it doesn't show up in the more elementary approach.]
Let's now suppose y is odd. Then by the second parameterization above we have x² = |m² − 3n²|/2, y² = mn and z = (m² + 3n²)/2 for odd coprime m, n ≥ 1. Anyway, we see that m and n are perfect squares. However, we now get a different type of equation, namely from x² = |m² − 3n²|/2 we obtain either 2x² + 3n² = m² or 3n² = m² + 2x². But the first equation cannot be solved: as m, n are odd, m² and n² are 1 (mod 8), and then 2x² ≡ −2 (mod 8), so x² ≡ −1 (mod 4), which is impossible. The second one can be solved: we have solutions to 3n² = m² + 2x² with
m = |r² + 4rs − 2s²|, x = |−r² + 2rs + 2s²| and n = r² + 2s²
However, from here, there seems to be no way to retrieve a smaller solution to (1) that depends somehow on r, s, m.
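Bernard Montaron's divisor search for x³ + y³ + z³ = n is easy to put into code. Here is a minimal Python sketch (my own illustration, not his Excel VB macro; the function names and the search depth z_min are arbitrary choices):

```python
from math import isqrt

def two_cubes(m):
    """All integer pairs (x, y), x >= y, with x^3 + y^3 = m > 0 (so s = x + y > 0)."""
    sols = []
    s = 1
    while s * s * s <= 4 * m:                 # constraint 0 < s <= cbrt(4m)
        if m % s == 0:                        # s must divide m
            num = 4 * m - s ** 3              # (x - y)^2 = (4m - s^3) / (3s)
            if num % (3 * s) == 0:
                d2 = num // (3 * s)
                d = isqrt(d2)
                if d * d == d2 and (s + d) % 2 == 0:   # need a perfect square, same parity
                    sols.append(((s + d) // 2, (s - d) // 2))
        s += 1
    return sols

def sum_three_cubes(n, z_min):
    """Fix z and solve x^3 + y^3 = n - z^3, scanning z downward as described above."""
    found = []
    for z in range(int(round(n ** (1 / 3))), z_min - 1, -1):
        m = n - z ** 3
        if m > 0:
            found += [(x, y, z) for x, y in two_cubes(m)]
    return found

# sum_three_cubes(2025, -300) recovers (16, -12, -7), (35, -33, -17) and,
# as (81, 28, -82), the solution listed above as (-82, 81, 28).
```

Because x³ + y³ and x + y have the same sign, fixing the coordinate played by z so that the remaining pair has a positive sum loses nothing: each solution turns up, as a permutation, for some z in the scan.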
188840
https://ocw.mit.edu/courses/16-06-principles-of-automatic-control-fall-2012/abb629fb7c2e395ee8c6df58d4754329_MIT16_06F12_Lecture_23.pdf
16.06 Principles of Automatic Control Lecture 23

Stability Margins

Stability margins measure how close a closed-loop system is to instability, that is, how large or small a change in the system is required to make it become unstable. The two commonly used measures of stability are the gain margin and the phase margin.
• The gain margin (GM) is the factor by which the gain can be increased before the system becomes unstable.
• The phase margin (PM) is the amount of additional phase lag that would make the phase be −180° where |KG(jω)| = 1.
The GM and PM are important not only because they measure how close the closed-loop system is to instability, but also because they (but especially the PM) can be used to predict the transient behavior of the closed-loop system.

Gain and phase margin on Nyquist diagram: [figure: Nyquist plot (axes Im(s), Re(s)) marking the PM angle and the 1/GM real-axis crossing relative to the point −1]

GM and PM on Bode diagram: [figure: Bode magnitude and phase (deg) plots versus frequency ω (rad/sec), with the GM read off the magnitude curve and the PM read off the phase curve relative to −180°]

Relationship between PM and damping

When the phase margin is small, the closed-loop system is close to instability, so there will be closed-loop poles near the jω-axis. That is, low PM ⇒ low damping ratio. This result can be made explicit by considering the closed-loop system [figure: unity negative-feedback loop with reference r and open-loop transfer function a/(s(s+b))]. The closed-loop transfer function is
T(s) = a / (s² + bs + a)
So,
ωn = √a, ζ = b / (2√a)
One can show that, for this system,
PM = tan⁻¹( 2ζ / √( √(1 + 4ζ⁴) − 2ζ² ) )
The functional form isn't really important; the important point is that ζ is nearly a linear function of PM: [figure: damping ratio ζ versus phase margin (deg), comparing the actual ζ curve with the line ζ = PM/100]. So one can often predict the (effective) damping ratio using the approximation
ζ ≈ PM/100 (PM in degrees)
Even when the system is not second order, PM is a good predictor of peak overshoot (Mp) and resonant peak magnitude (Mr). PM is often specified as a design requirement.

Bode's Gain-Phase Relationship

We saw that for poles and zeros in the left-half-plane, the phase of G(jω) is proportional to the slope of the magnitude curve (on a log-log scale), but smeared out. That is,
∠G(jω) ≈ 90° × slope of |G|
This idea can be made precise via Bode's gain-phase theorem: for any stable, minimum phase system, the phase of G(jω) can be determined uniquely from the magnitude of G(jω). The phase is in fact given by
∠G(jω₀) = (1/π) ∫₋∞^∞ (dM/du) W(u) du
where
M = log |G(jω)| (natural log)
u = log(ω/ω₀)
dM/du = slope of the Bode plot magnitude
W(u) = weighting function = log(coth(|u|/2))
Note that this is a funny sort of convolution: we are convolving a weighting function with the slope of another function, but working on logarithmic axes! The weighting function looks like: [figure: W(u), a sharp peak at u = 0 decaying to zero by about |u| = 4]. Note that 92% of the area of W(u) is within ±1 decade of the center. So the phase is nearly completely determined by the slope of M within ~1 decade.

Why is this result important? It implies that in almost every case, a well-designed control loop will have a magnitude plot with slope −1 at the crossover frequency!¹ [figure: typical loop gain |GK| for a well-designed control system, with slope −2 at low frequency, −1 near the crossover frequency ωc, and −2 above it.] In this case, the phase at crossover will be a weighted average of −90° (weighted a lot), −180° (weighted some), and 0° (weighted hardly at all). So the phase will be between −90° and −180°, with probably reasonable PM.

¹ Actually, in some cases, the slope might be +1, but this is rare.

MIT OpenCourseWare 16.06 Principles of Automatic Control Fall 2012. For information about citing these materials or our Terms of Use, visit:
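As a quick numerical check of the PM-damping relation above, here is a small Python sketch (not part of the original lecture; the grid resolution and the example values a = 4, b = 2 are arbitrary choices). It locates the crossover frequency of the open loop a/(s(s+b)), reads off the phase margin, and compares it with the closed form and the ζ ≈ PM/100 rule:

```python
import numpy as np

def phase_margin_deg(a, b):
    """PM of the open loop L(jw) = a / (jw (jw + b)), found on a log-spaced grid."""
    w = np.logspace(-2, 2, 20001)                 # frequency grid, rad/s
    L = a / (1j * w * (1j * w + b))               # open-loop frequency response
    i = np.argmin(np.abs(np.abs(L) - 1.0))        # crossover: |L(jw)| = 1
    return 180.0 + np.degrees(np.angle(L[i]))

a, b = 4.0, 2.0                                   # example numbers, giving zeta = 0.5
zeta = b / (2 * np.sqrt(a))
pm_exact = np.degrees(np.arctan(2 * zeta / np.sqrt(np.sqrt(1 + 4 * zeta**4) - 2 * zeta**2)))
print(phase_margin_deg(a, b), pm_exact, 100 * zeta)   # ~51.8, ~51.8, 50.0
```

For these numbers the numerical PM, the closed form, and the PM ≈ 100ζ rule agree to within about two degrees, which is exactly the near-linearity the ζ-versus-PM plot illustrates.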
188841
https://faculty.washington.edu/yenchic/18W_425/Lec5_survival.pdf
STAT 425: Introduction to Nonparametric Statistics, Winter 2018
Lecture 5: Survival Analysis
Instructor: Yen-Chi Chen

Note: in this lecture, we will use the notations T1, · · · , Tn for the response variables, and all these random variables are positive. These random variables will be called event times or death times. They often refer to certain 'time' characteristics of each individual, e.g., the time at which the individual dies or gets a disease.

5.1 Survival Function

We assume that our data consist of IID random variables T1, · · · , Tn ∼ F. The survival function S(t) of this population is defined as
S(t) = P(T1 > t) = 1 − F(t).
Namely, it is just one minus the corresponding CDF. Although this definition is extremely simple and seems almost trivial given the CDF, later we will see that it turns out to be an elegant tool for modeling and interpreting the data. In medical research, the quantity Ti often refers to a certain time characteristic of individual i. For instance, the variable T may refer to the age at which individual i passes away. Then the survival function S(t) can be interpreted as the chance that an individual is still alive after age t. If S(60) = 0.8, it means that 80% of the individuals in the population will still be alive at age 60. Namely, S(t) is the probability that an individual will survive past time t. Here are some basic properties of S(t):
• S(0) = 1 and S(∞) = 0.
• S(t) is a non-increasing function.
A quantity that is often used along with the survival function is the hazard function. The hazard function is
h(t) = lim_{Δt→0} P(t < T1 ≤ t + Δt | T1 > t) / Δt = p(t) / S(t),
where p(t) = dF(t)/dt is the PDF of the random variable T1. Note that you can also write the hazard function as
h(t) = −∂ log S(t) / ∂t.
How can we interpret the hazard function? The hazard function describes the 'intensity of death' at time t given that the individual has already survived past time t. There is another quantity that is also common in survival analysis, the cumulative hazard function:
H(t) = ∫₀ᵗ h(s) ds.
You can interpret H(t) as the cumulative amount of hazard up to time t. The cumulative hazard function and the survival function are linked as follows:
H(t) = −log S(t), S(t) = e^{−H(t)} = e^{−∫₀ᵗ h(s)ds}.

Example 1. What are the survival function and hazard function of an exponential R.V.? Let T1 ∼ Exp(λ). Then
p(t) = λe^{−λt}, F(t) = 1 − e^{−λt} for t ≥ 0.
Thus, S(t) = e^{−λt}, and h(t) = λ, H(t) = λt. Namely, for an exponential distribution, the hazard function is a constant and the cumulative hazard is just a linear function of time.

Example 2 (Weibull distribution). The Weibull distribution is a distribution with two parameters, λ and k, for a positive random variable. Its PDF is
p(t) = λk · (λt)^{k−1} · e^{−(λt)^k}, t ≥ 0.
When k = 1, it reduces to the exponential distribution. Its CDF and survival function are
F(t) = 1 − e^{−(λt)^k}, S(t) = e^{−(λt)^k}.
And the hazard function and cumulative hazard function are
h(t) = λk · (λt)^{k−1}, H(t) = (λt)^k.

5.1.1 Estimating the Survival Function: Simple Method

How do we estimate the survival function? There are three methods. The first method is a parametric approach: this method assumes a parametric model (e.g., an exponential distribution) for the data, estimates the parameter first, and then forms the estimator of the survival function. A second approach is to compute the EDF first and then convert it to an estimator of the survival function.
The last approach is a powerful nonparametric method called the Kaplan-Meier estimator, which we will discuss in the next section.

Parametric Approach. Assume that we model the distribution as an exponential distribution with unknown parameter λ. An estimator of λ is (you can check HW01 to see why this is an estimator)
λ̂ = 1/T̄n = n / Σᵢ₌₁ⁿ Tᵢ.
Then we estimate the survival function using
Ŝ1(t) = e^{−λ̂t} = e^{−t/T̄n}, t ≥ 0.

EDF Approach. Recall that the EDF F̂(t) is
F̂(t) = (1/n) Σᵢ₌₁ⁿ I(Tᵢ ≤ t).
Then the survival function can be estimated by
Ŝ2(t) = 1 − F̂(t) = (1/n) Σᵢ₌₁ⁿ I(Tᵢ > t).

5.1.2 Kaplan-Meier estimator

Let t1 < t2 < · · · < tm be the time points where the observations T1, · · · , Tn actually take values. To see how the estimator is constructed, we do the following analysis. We partition the time axis into disjoint segments:
B0 = [0, t1), B1 = [t1, t2), · · · , B_{m−1} = [t_{m−1}, tm), Bm = [tm, ∞).
Then we define
Nℓ = number of individuals alive at (event happens after) the beginning of Bℓ = Σᵢ₌₁ⁿ I(Tᵢ ≥ tℓ)
and
Dℓ = number of individuals who die (event happens) in Bℓ = Σᵢ₌₁ⁿ I(Tᵢ ∈ Bℓ).
Now we have converted T1, · · · , Tn to (N0, D0), · · · , (Nm, Dm). Formally, Nℓ should be defined as the number of individuals at risk at the beginning of Bℓ; later we will explain what at risk means. The Kaplan-Meier (KM) estimator estimates S(t) using
ŜKM(t) = ∏_{ℓ: tℓ ≤ t} (1 − Dℓ/Nℓ).
What is the intuition of the KM estimator? We now consider t in different time segments and see if we can gain some intuition. Recall that the survival function is
S(t) = P(T > t) = probability of surviving past time t.
For t ∈ B0 = [0, t1), no event happens within this interval, so ŜKM(t) = 1. For t ∈ B1 = [t1, t2), the survival function is
S(t) = P(T > t) = P(survives past time t) = P(survives in [0, t1) and in [t1, t)) = P(survives in B0 and in B1).
Now recall that for two events A and B, P(A and B) = P(A)P(B|A). Thus,
S(t) = P(survives in B0 and in B1) = P(survives in B0) P(survives in B1 | survives in B0).
The probability P(survives in B1 | survives in B0) can be estimated using
P̂(survives in B1 | survives in B0) = (N1 − D1)/N1 = 1 − D1/N1,
and because no event occurs in B0, P(survives in B0) = 1. Thus,
ŜKM(t) = 1 × (1 − D1/N1).
Now for the next time segment B2, we apply the same intuition. Namely, for t ∈ B2,
S(t) = P(survives in B0) P(survives in B1 | survives in B0) P(survives in B2 | survives in B1),
where we can estimate P(survives in B2 | survives in B1) via
P̂(survives in B2 | survives in B1) = 1 − D2/N2,
which leads to
ŜKM(t) = 1 × (1 − D1/N1) × (1 − D2/N2).
For the other segments, we can apply the same procedure to obtain the estimator. This gives you the intuition of how the KM estimator is constructed. This derivation can also be seen in ~ifischer/Intro_Stat/Lecture_Notes/8_-Survival_Analysis/8.2-_Kaplan-Meier_Formula.pdf. Note that when we observe every individual's event time (namely, there is no censoring, a mechanism we will discuss later), the KM estimator and the EDF approach are the same.

5.1.3 Nelson-Aalen estimator

The Nelson-Aalen (NA) estimator is another powerful estimator of the survival function. It not only estimates the survival function but also provides an estimate of the cumulative hazard. Actually, the NA estimator first estimates the cumulative hazard function and then converts it into an estimate of the survival function using the relation S(t) = e^{−H(t)}. Here is an intuition about how this estimator is constructed.
Recall that the KM estimator uses
ŜKM(t) = ∏_{ℓ: tℓ ≤ t} (1 − Dℓ/Nℓ)
as an estimate of S(t). When Dℓ is much smaller than Nℓ, we have
e^{−Dℓ/Nℓ} ≈ 1 − Dℓ/Nℓ.
Therefore,
ĤKM(t) = −log ŜKM(t) = −Σ_{ℓ: tℓ ≤ t} log(1 − Dℓ/Nℓ) ≈ −Σ_{ℓ: tℓ ≤ t} log e^{−Dℓ/Nℓ} = Σ_{ℓ: tℓ ≤ t} Dℓ/Nℓ.
Using the above derivation, the NA estimator estimates the cumulative hazard function by
ĤNA(t) = Σ_{ℓ: tℓ ≤ t} Dℓ/Nℓ
and then estimates the survival function as
ŜNA(t) = e^{−ĤNA(t)} = exp( −Σ_{ℓ: tℓ ≤ t} Dℓ/Nℓ ).
The theoretical analysis of the KM and NA estimators (such as the expectation and variance) involves some non-trivial algebra. If you are interested, I would recommend the following lecture note: http://www4.stat.ncsu.edu/~dzhang2/st745/chap2.pdf.

5.2 Censoring

However, in reality, our data may not be so nice. We may not be able to observe the actual event time Ti because of many complications. For instance, in medical research, individuals may leave the study (called dropout), so we only observe their leaving time instead of the actual death time. The phenomenon that we sometimes cannot observe the actual time but only a 'censoring time' is called censoring in statistics. To model this process, we often need to introduce two other variables: Y and C. Here T is the actual event time of interest, C is the censoring time that is competing with T, and Y is the actually observed time. In most cases, we will consider the right-censoring problem, where the three variables are related by
Y = min{T, C}.
We will assume that T and C are independent. Note that if what we observe is Y = max{T, C}, the problem is called a left-censoring problem. Moreover, we not only observe Y; we also know whether Y comes from the event time or the censoring time. Namely, we have one extra variable δ such that
δ = I(T < C).
When we only observe (Y1, δ1), · · · , (Yn, δn) instead of T1, · · · , Tn, how can we infer the survival function of T1? This is the central question in much biostatistical research. Because we now have several R.V.s, we will add subscripts to denote the functions associated with each random variable. Namely, FT, ST, hT, HT are the CDF, survival function, hazard function, and cumulative hazard function of the random variable T; FC, SC, hC, HC are those of the random variable C; and FY, SY, hY, HY are those of Y. Here are some relations among these functions.
• SY(t) = P(Y > t) = P(min{T, C} > t) = P(T > t)P(C > t) = ST(t)SC(t). Namely, the survival function of Y is the product of the other two survival functions.
• FY(t) = 1 − (1 − FT(t))(1 − FC(t)) = FT(t) + FC(t) − FT(t)FC(t).
• pY(t) = pT(t) + pC(t) − pT(t)FC(t) − pC(t)FT(t) = pT(t)SC(t) + pC(t)ST(t). The PDF of Y is the sum of the PDFs of the other two, each weighted by the other's survival function.
• hY(t) = hT(t) + hC(t). Namely, the hazard function of Y is the sum of the other two.
• HY(t) = HT(t) + HC(t). Similarly, the cumulative hazard is also the sum of the other two.
Note that δ is just a Bernoulli random variable with success probability P(T < C).

5.2.1 Estimating the Survival Function under Censoring

When there is censoring, the EDF approach no longer works. However, the KM and NA estimators are still valid. Essentially, each estimator is the same, but we need to modify Nℓ and Dℓ a little. As we have mentioned, formally Nℓ should be defined as
Nℓ = number of individuals at risk at the beginning of Bℓ.
What does the phrase at risk mean?
It refers to being alive and not censored, so Nℓ can be modified by replacing Ti with Yi. Thus,
Nℓ = Σᵢ₌₁ⁿ I(Yᵢ ≥ tℓ).
The quantity Dℓ is still the number of events in the interval Bℓ, but we need to modify it to count only the observed events in the interval. Therefore,
Dℓ = Σᵢ₌₁ⁿ I(Yᵢ ∈ Bℓ, δᵢ = 1).
Using these two modifications, the KM estimator and NA estimator are
ŜKM(t) = ∏_{ℓ: tℓ ≤ t} (1 − Dℓ/Nℓ), ŜNA(t) = exp( −Σ_{ℓ: tℓ ≤ t} Dℓ/Nℓ ).
Note that parametric models may still be applicable in the censoring case, and the estimation is often done using a maximum likelihood approach, which is beyond the scope of this course, so we will not cover it here. Here is a lecture note about this topic:

5.3 Cox Model

In reality, we often not only observe the event time for an individual but also have access to other covariates of this individual. We are often interested in understanding how these covariates affect the survival function of the event. For instance, in a cancer study, we may have each individual's age when they got cancer (the event time T) and this individual's gender, BMI, smoking habit, and education level. The other variables are the covariates in this study. Health scientists are often interested in how these covariates change the survival function. Let X denote the covariates. A parameter of interest will be the survival function of T given X. Namely, it is the conditional survival function
S(t|x) = P(T > t | X = x).
For instance, we may be interested in
S(Age = t | (gender, BMI, smoking habit, education level) = (male, 20, never smoke, college)).
We can then define the conditional hazard function and conditional cumulative hazard function as
h(t|x) = −∂ log S(t|x) / ∂t, H(t|x) = −log S(t|x).
The Cox (proportional hazards) model is one of the most popular models combining the covariates and the survival function. It starts with modeling the hazard function h(t|X = x):
h(t|X = x) = h0(t) exp(xᵀβ),
where β is the vector of coefficients of the covariates. The function h0(t) is called the baseline hazard function. Namely, the Cox model assumes that the covariates have a linear multiplicative effect on the hazard function and that the effect stays the same across time. This implies that the conditional cumulative hazard function is
H(t|x) = exp(xᵀβ) ∫₀ᵗ h0(s) ds = exp(xᵀβ) H0(t),
where H0(t) is the baseline cumulative hazard function. This further yields the conditional survival function
S(t|x) = exp(−H(t|x)) = exp( −exp(xᵀβ) H0(t) ) = exp(−H0(t))^{exp(xᵀβ)} = S0(t)^{exp(xᵀβ)},
where S0(t) is called the baseline survival function. Why is it called a proportional hazards model? Here is an intuition. Consider two individuals with different covariates, one with X = x1 and the other with X = x2. The ratio of their hazard functions
h(t|x1) / h(t|x2) = h0(t) exp(x1ᵀβ) / (h0(t) exp(x2ᵀβ)) = exp((x1 − x2)ᵀβ)
is a constant over time. Namely,
h(t|x1) = exp((x1 − x2)ᵀβ) × h(t|x2) ∝ h(t|x2) for all t ≥ 0.
Thus, their hazards are always proportional to each other regardless of the value of time t. Estimation of the parameter β is often done by maximizing the partial likelihood function
L̂n(β) = ∏ᵢ₌₁ⁿ Lᵢ(β), where Lᵢ(β) = h(Tᵢ|Xᵢ) / Σ_{j: Tj ≥ Ti} h(Tj|Xj) = exp(Xᵢᵀβ) / Σ_{j: Tj ≥ Ti} exp(Xjᵀβ).
Namely, our estimator is β̂n = argmax_β L̂n(β). This estimator turns out to be an unbiased estimator, has variance shrinking at rate O(n⁻¹), and has asymptotic normality under suitable conditions.
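As a small illustration of the partial likelihood above, here is a sketch (not from the notes; the simulated one-covariate setup and the crude grid search are my own choices, and ties in the event times are assumed away):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 200, 0.7
x = rng.normal(size=n)                                  # one covariate per individual
t = rng.exponential(1.0 / np.exp(beta_true * x))        # h(t|x) = h0 * exp(beta*x), h0 = 1

def neg_log_partial_likelihood(beta):
    # sum over events i of [beta*x_i - log sum_{j: t_j >= t_i} exp(beta*x_j)]
    ll = 0.0
    for i in range(n):
        risk = t >= t[i]                                # risk set at time t_i
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[risk])))
    return -ll

grid = np.linspace(-2, 2, 401)
beta_hat = grid[np.argmin([neg_log_partial_likelihood(b) for b in grid])]
print(beta_hat)   # should land near beta_true = 0.7
```

Note that the baseline hazard h0(t) never appears in the objective, which is exactly the point made next.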
An interesting fact is that we do not need to know the baseline hazard function h0(t) to estimate β! (Estimating h0(t) is not easy, and the convergence rate is often slow; we will discuss a similar pattern in density estimation.) The property that we can estimate the parameter of interest without estimating the entire model is related to the topic of semi-parametric models. Note that the detailed analysis and derivation are beyond the scope of this course (you may learn them in a course called 'survival analysis'). If you want to learn more, I would recommend the following two lecture notes:
•
•
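To make the censored KM and NA formulas concrete, here is a minimal NumPy sketch (my own illustration, not code from the notes) that computes both estimators from observed pairs (Yi, δi):

```python
import numpy as np

def km_na(y, d):
    """Kaplan-Meier and Nelson-Aalen survival estimates at the distinct event times.

    y : observed times Y_i = min(T_i, C_i); d : event indicators delta_i = I(T_i < C_i).
    """
    y, d = np.asarray(y, float), np.asarray(d, int)
    times = np.unique(y[d == 1])                                        # t_1 < ... < t_m
    n_at_risk = np.array([(y >= t).sum() for t in times])               # N_l
    n_events = np.array([((y == t) & (d == 1)).sum() for t in times])   # D_l
    s_km = np.cumprod(1.0 - n_events / n_at_risk)   # prod over t_l <= t of (1 - D_l/N_l)
    s_na = np.exp(-np.cumsum(n_events / n_at_risk)) # exp(- sum over t_l <= t of D_l/N_l)
    return times, s_km, s_na

# Toy data: d = 0 marks a censored observation.
y = [2, 3, 3, 5, 6, 7, 8, 9]
d = [1, 1, 0, 1, 0, 1, 1, 0]
print(km_na(y, d))
```

Intervals containing only censored observations contribute Dℓ = 0, i.e. a factor of 1 to the product, so evaluating at the distinct event times loses nothing.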
188842
https://www.youtube.com/watch?v=f_W8vsEF488
Condition for parallelism and perpendicularity in terms of their slopes | Class 11 | Maths | Straight Lines
Math e Tricks · 6050 subscribers · 93 likes · 3830 views · Posted: 25 Dec 2021
Description: Conditions for parallelism and perpendicularity of the lines in terms of their slopes | Class 11 | Maths | Straight Lines. In this video we will see the conditions for parallelism and perpendicularity of the lines in terms of their slopes, explained in a simple and understandable way.
Link to MY FIRST YOUTUBE VIDEO:-
Link to My journey with math e tricks:-
Link to Straight Lines Class 11 introduction:
Link to Slope of a line:-
Copyright:- Everything you see in this video was created by me (Sumaya sarfaraz) unless otherwise stated. Please do not use any content without first asking permission at sumayasarf15@gmail.com
#StraightLines #MathsTeacher #straightlines #cbseclass #class11 #maths
14 comments
Transcript (auto-generated Hindi captions, translated; unrecoverable speech-to-text noise omitted): Hello, hi guys, welcome and welcome back to my channel. In this video we will see the conditions for parallelism and perpendicularity of two lines in terms of their slopes. Let two non-vertical lines make angles α1 and α2 with the positive direction of the x-axis, so their slopes are m1 = tan α1 and m2 = tan α2. If the lines are parallel, then α1 = α2, so tan α1 = tan α2, which means m1 = m2: two lines are parallel if and only if their slopes are equal. If the lines are perpendicular, then α2 = 90° + α1, so m2 = tan(90° + α1) = −cot α1 = −1/m1, which gives m1·m2 = −1: two lines are perpendicular if and only if the product of their slopes is −1. A worked problem follows, finding an unknown coordinate of a point on a line using these conditions.
Continuing, the worked problem uses the perpendicularity condition: when two lines with slopes m1 and m2 are perpendicular, set m1·m2 = −1, compute each slope from its two points as (y2 − y1)/(x2 − x1), and solve the resulting equation for the unknown coordinate, taking care with the plus and minus signs while simplifying. If these basics are clear, questions on parallel and perpendicular lines become easy. That's all for this video; take care, may the Almighty bless you. [Applause]
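A tiny sketch of the two slope conditions from the lesson (my own illustration; the tolerance handles floating-point comparison, and vertical lines are excluded):

```python
def are_parallel(m1: float, m2: float, tol: float = 1e-9) -> bool:
    # Parallel lines make equal angles with the x-axis, so m1 = m2.
    return abs(m1 - m2) < tol

def are_perpendicular(m1: float, m2: float, tol: float = 1e-9) -> bool:
    # Perpendicular lines satisfy m1 * m2 = -1.
    return abs(m1 * m2 + 1.0) < tol

print(are_parallel(2.0, 2.0), are_perpendicular(2.0, -0.5))  # True True
```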
188843
https://secure-media.collegeboard.org/digitalServices/pdf/ap/ap16_chemistry_q1.pdf
A student investigates the enthalpy of solution, ΔHsoln, for two alkali metal halides, LiCl and NaCl. In addition to the salts, the student has access to a calorimeter, a balance with a precision of ±0.1 g, and a thermometer with a precision of ±0.1°C.
(a) To measure ΔHsoln for LiCl, the student adds 100.0 g of water initially at 15.0°C to a calorimeter and adds 10.0 g of LiCl(s), stirring to dissolve. After the LiCl dissolves completely, the maximum temperature reached by the solution is 35.6°C.
(i) Calculate the magnitude of the heat absorbed by the solution during the dissolution process, assuming that the specific heat capacity of the solution is 4.18 J/(g·°C). Include units with your answer.
q = mcΔT = (110.0 g)(4.18 J/(g·°C))(35.6°C − 15.0°C) = 9,470 J = 9.47 kJ
1 point is earned for the correct setup. 1 point is earned for the correct answer with units.
(ii) Determine the value of ΔHsoln for LiCl in kJ/molrxn.
10.0 g LiCl × (1 mol LiCl / 42.39 g LiCl) = 0.236 mol LiCl
−9.47 kJ / 0.236 mol LiCl = −40.1 kJ/molrxn
1 point is earned for the number of moles of LiCl. 1 point is earned for the correct ΔHsoln and the correct sign.
To explain why ΔHsoln for NaCl is different than that for LiCl, the student investigates factors that affect ΔHsoln and finds that ionic radius and lattice enthalpy (which can be defined as the ΔH associated with the separation of a solid crystal into gaseous ions) contribute to the process. The student consults references and collects the data shown in the table below.
Ion | Ionic Radius (pm)
Li+ | 76
Na+ | 102
(b) Write the complete electron configuration for the Na+ ion in the ground state.
1s² 2s² 2p⁶
1 point is earned for the complete correct configuration.
(c) Using principles of atomic structure, explain why the Na+ ion is larger than the Li+ ion.
The valence electrons in the Na+ ion are in a higher principal energy level than the valence electrons in the Li+ ion. Electrons in higher principal energy levels are, on average, farther from the nucleus.
1 point is earned for a correct explanation based on occupied principal energy levels.
(d) Which salt, LiCl or NaCl, has the greater lattice enthalpy? Justify your answer.
LiCl. Because the Li+ ion is smaller than the Na+ ion, the Coulombic attractions between ions in LiCl are stronger than in NaCl. This results in a greater lattice enthalpy.
1 point is earned for the correct choice and justification.
(e) Below is a representation of a portion of a crystal of LiCl. Identify the ions in the representation by writing the appropriate formulas (Li+ or Cl−) in the boxes below.
See diagram above. 1 point is earned for both identifications.
(f) The lattice enthalpy of LiCl is positive, indicating that it takes energy to break the ions apart in LiCl. However, the dissolution of LiCl in water is an exothermic process. Identify all particle-particle interactions that contribute significantly to the dissolution process being exothermic. For each interaction, include the particles that interact and the specific type of intermolecular force between those particles.
There are interactions between Li+ ions and polar water molecules and between Cl− ions and polar water molecules. These are ion-dipole interactions.
1 point is earned for identifying the particles that interact. 1 point is earned for correctly identifying the type of interaction.
© 2016 The College Board. Visit the College Board on the Web: www.collegeboard.org.
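A small sketch of the part (a) arithmetic (illustrative only; the values are hard-coded from the problem statement):

```python
# Part (a): q = m c dT for the whole solution, then dH_soln = -q / n(LiCl).
m_solution = 100.0 + 10.0         # g (water plus dissolved LiCl)
c = 4.18                          # J/(g*degC)
dT = 35.6 - 15.0                  # degC
q = m_solution * c * dT           # heat absorbed by the solution, J
n_licl = 10.0 / 42.39             # mol, using M(LiCl) = 42.39 g/mol
dH_soln = -(q / 1000.0) / n_licl  # kJ/mol; negative because dissolution is exothermic
print(q, dH_soln)   # about 9470 J and about -40 kJ/mol (the key reports -40.1
                    # after rounding the intermediate values to 9.47 kJ and 0.236 mol)
```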
AP® CHEMISTRY 2016 SCORING COMMENTARY

Question 1

Overview

This question assessed the students' understanding of a range of chemical concepts concerning the physical properties of ionic compounds. The students were asked to comment on a series of scenarios that dealt with a single ionic compound (LiCl), as well as compare/contrast the physical properties of different cations (Li+ vs. Na+) or ionic compounds (LiCl vs. NaCl). In part (a) students were presented with experimental calorimeter data, obtained from the dissolution of LiCl in water, and asked to calculate the magnitude of the heat flowing between the system and its surroundings. With the calculated value for heat energy, students were then asked to calculate the change in enthalpy of solution of LiCl (ΔHsoln) in units of kJ/molrxn. In part (b) students were asked to write out the complete electron configuration for the sodium ion (Na+). In part (c) students were asked to explain why the ionic radius of Na+ is greater than that of Li+. In part (d) students were asked to determine which salt, LiCl or NaCl, has the greater lattice enthalpy and to justify their selection. In part (e) students were provided a diagram of a typical three-dimensional lattice structure composed of small and large ions (represented as black and gray circles, respectively). Students were asked to label each type of circle with the correct ions, either Li+ or Cl−. In part (f) students were presented with a dissolution scenario that offered some information about the thermodynamic properties involved in such a physical change. The students were then asked to identify the particles that are primarily involved in the exothermic process of dissolution and to identify the primary type of interaction that occurs between the particles.

Sample: 1A Score: 10

This response earned 10 out of 10 possible points. The student earned both points in part (a)(i) for correctly setting up and calculating the amount of heat, with the correct units, absorbed during the dissolution process (inserting a value for ΔT, rather than showing the subtraction of Tf − Ti, is accepted). In part (a)(ii) the student correctly calculated the number of moles of LiCl dissolved in water and then used that quantity, along with the value of q from part (a)(i), to correctly calculate the value for ΔHsoln. To complete the second point in part (a)(ii) the student indicated that the dissolution process was exothermic by including the negative sign in ΔHsoln. The student earned the point in part (b) for giving the complete electron configuration for Na+. The point in part (c) was earned by indicating that the sodium ion has a full (occupied) second energy level, whereas the lithium ion only has one energy level. The second sentence of the response for part (c) is not required to earn the point. In part (d) the student correctly identified LiCl as having the larger lattice enthalpy. The point was then earned for a discussion of the smaller lithium ion radius that results in a greater Coulombic attraction between lithium and chloride ions relative to the attractions between sodium and chloride ions, and that more energy is therefore required to separate LiCl into its ions.
The point in part (e) was earned for placing Cl− in the box with the arrow pointing toward the large gray circle and Li+ in the box with the arrow pointing toward the small black circle. Although the student goes into great detail about the various interactions (both endothermic and exothermic) associated with dissolution, the student earned 1 point in part (f) by stating that the lithium and chloride ions are surrounded by water molecules with the "oxygens pointed towards Li+" and the "hydrogens pointed towards Cl−." The second point was earned for stating that ion-dipole attractions are the intermolecular force between the ions and water.

Sample: 1B Score: 8

This response earned 8 out of 10 possible points. The student earned both points in part (a)(i) for correctly setting up the heat equation and for obtaining the correct value, with proper units, for the heat absorbed. Although the student did not explicitly report a value for the number of moles of LiCl in part (a)(ii), the setup of the mole calculation within the calculation for ΔHsoln is correct, so the first point was earned. To earn the second point in part (a)(ii) the student calculated the value of ΔHsoln and indicated that the dissolution process was exothermic by including the negative sign in ΔHsoln. The student earned the point in part (b) for writing the complete electron configuration for Na+. The student did not earn the point in part (c). The argument that Na+ has "more" electrons or "more" energy levels is not sufficient to justify the sodium ion's larger radius. The student earned the point in part (d) for indicating that there is a greater attraction between lithium and chloride ions because they are closer together due to lithium's smaller radius. The point in part (e) was earned for placing Cl− in the box with the arrow pointing toward the large gray circle and Li+ in the box with the arrow pointing toward the small black circle. The student earned the first point in part (f) for naming the particles that are interacting with one another. The second point was not earned because the student identifies the particle interactions as being either dipole-dipole forces or ionic bonds.

Sample: 1C Score: 6

This response earned 6 out of 10 possible points. The student earned both points in part (a)(i) for correctly setting up the heat equation and for obtaining the correct value, with proper units, for the heat absorbed. The student did not earn either point in part (a)(ii) for stating that the dissolution is exothermic. The point for the electron configuration in part (b) was earned. In part (c) the student explains the larger radius of Na+ by stating that it has "more" energy levels (than Li+) without emphasizing the electron occupation of those energy levels; therefore, the point was not earned. The point in part (d) was not earned for the "larger molar mass" argument. The point in part (e) was earned for placing Cl− in the box with an arrow pointing toward the large gray circle and for placing Li+ in the box with an arrow pointing toward the small black circle. Both points were earned for the student's response in part (f). The particles of interest are listed as H2O, Li+, and Cl−, and the primary intermolecular force is identified as ion-dipole.
188844
https://byjus.com/maths/director-circle-of-hyperbola/
In Geometry, we have learned different types of conic sections, and the hyperbola is one among them. A hyperbola is defined as the locus of points whose distance from a fixed point (called the focus) is in a constant ratio to its distance from a fixed line (called the directrix). For a hyperbola, the eccentricity is greater than 1 (i.e., e > 1). In this article, we are going to discuss one of the associated loci, called the "Director Circle of Hyperbola", in detail with many solved examples.

What is the Director Circle of Hyperbola?
The director circle of the hyperbola is defined as the locus of the point of intersection of two perpendicular tangents to the hyperbola. We know that the standard equation of the hyperbola is (x²/a²) − (y²/b²) = 1. Thus, the equation of the director circle of a hyperbola is derived from this standard form.

Equation of Director Circle of Hyperbola
The equation of the director circle of the hyperbola (x²/a²) − (y²/b²) = 1 is x² + y² = a² − b². The centre of the circle is the origin, i.e. (0, 0), and the radius is √(a² − b²).

Three Cases of Director Circle of Hyperbola
Case 1: b² = a². If b² = a², then the radius of the circle is zero, and the circle reduces to a point at the centre (the origin). So, in this case, the centre is the only point from which we can draw tangents at right angles to the hyperbola.
Case 2: b² < a². If b² < a², then the director circle of the hyperbola is real.
Case 3: b² > a². If b² > a², the radius of the circle is imaginary. In this case, there is no circle, and no tangents at right angles can be drawn to the hyperbola.

Director Circle of Hyperbola Examples
Go through the following examples to understand the director circle of a hyperbola:

Example 1: Determine the equation of the director circle of the hyperbola (x²/16) − (y²/4) = 1.
Solution: Given: the hyperbola equation is (x²/16) − (y²/4) = 1. We know that the equation of the director circle of the hyperbola (x²/a²) − (y²/b²) = 1 is x² + y² = a² − b². By comparing the given equation with the standard form, we have a² = 16 and b² = 4. ⇒ x² + y² = 16 − 4 ⇒ x² + y² = 12. Therefore, the required equation of the director circle of the hyperbola is x² + y² = 12.

Example 2: Find the equation and the diameter of the director circle of the hyperbola (x²/49) − (y²/25) = 1.
Solution: Given hyperbola equation: (x²/49) − (y²/25) = 1 …(1). The standard form of the hyperbola is (x²/a²) − (y²/b²) = 1 …(2). As we know, the director circle equation of the hyperbola is x² + y² = a² − b². By comparing (1) and (2), we get a² = 49 and b² = 25. Thus, the required director circle equation of the hyperbola is x² + y² = 49 − 25 ⇒ x² + y² = 24. We know that the radius of the director circle = √(a² − b²) = √24. As the diameter is twice the radius, the diameter of the director circle of the hyperbola is 2√24.

Example 3: Find the equation of the director circle of the hyperbola (x²/100) − (y²/36) = 1.
Solution: Given: (x²/100) − (y²/36) = 1. Here, a² = 100 and b² = 36. Substituting the values in the director circle equation, we get x² + y² = 100 − 36 ⇒ x² + y² = 64, which is the required director circle equation of the hyperbola.

To learn more Maths related concepts, stay tuned to BYJU'S – The Learning App.
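A small sketch of the computation in the examples (my own illustration; the hyperbola is given by a² and b² from x²/a² − y²/b² = 1, and the three cases above are handled in order):

```python
import math

def director_circle(a2: float, b2: float) -> str:
    """Return the director-circle equation x^2 + y^2 = a^2 - b^2 for the hyperbola."""
    r2 = a2 - b2
    if r2 > 0:
        return f"x^2 + y^2 = {r2} (real circle, radius {math.sqrt(r2):.4f})"
    if r2 == 0:
        return "x^2 + y^2 = 0 (circle degenerates to the centre)"
    return "no real director circle (b^2 > a^2, radius imaginary)"

print(director_circle(16, 4))     # Example 1: x^2 + y^2 = 12
print(director_circle(49, 25))    # Example 2: x^2 + y^2 = 24
print(director_circle(100, 36))   # Example 3: x^2 + y^2 = 64
```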
188845
https://www.mathway.com/popular-problems/Algebra/200092
Algebra Examples

Factor x² + x − 2.

Step 1: Consider the form x² + bx + c. Find a pair of integers whose product is c and whose sum is b. In this case, find the pair whose product is −2 and whose sum is 1: −1, 2.

Step 2: Write the factored form using these integers: (x − 1)(x + 2).
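A sketch of the integer-pair search in Step 1 (illustrative only; it handles monic quadratics x² + bx + c with c ≠ 0):

```python
def factor_monic_quadratic(b: int, c: int):
    """Find integers (p, q) with p*q = c and p + q = b, so x^2 + bx + c = (x+p)(x+q)."""
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0 and p + c // p == b:
            return p, c // p
    return None  # no factorization over the integers

print(factor_monic_quadratic(1, -2))  # (-1, 2): x^2 + x - 2 = (x - 1)(x + 2)
```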
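As a quick sanity check of the factorization above, the roots of the quadratic can be confirmed numerically. The snippet below is ours, not part of the Mathway page:

```r
# Roots of x^2 + x - 2 should be x = 1 and x = -2,
# matching the factors (x - 1)(x + 2).
# polyroot() takes coefficients in increasing order of power: -2 + 1*x + 1*x^2.
polyroot(c(-2, 1, 1))
# Returns 1 and -2 (as complex numbers with zero imaginary part).
```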
188846
https://math.stackexchange.com/questions/1368188/what-is-the-difference-between-stationary-point-and-critical-point-in-calculus
derivatives - What is the difference between stationary point and critical point in Calculus? - Mathematics Stack Exchange

What is the difference between stationary point and critical point in Calculus?

Asked 10 years, 2 months ago. Modified 1 year, 6 months ago. Viewed 52k times. Score: 10.

What is the difference between stationary point and critical point? We find critical points by finding the roots of the derivative, but in which cases is a critical point not a stationary point? An example would be most helpful.

I am asking this question because I ran into the following exercise: "Locate the critical points and identify which critical points are stationary points." So, obviously, it's implying that not every critical point is a stationary point.

Tags: calculus, derivatives

edited Jul 21, 2015 at 0:49 by Mike Pierce; asked Jul 20, 2015 at 23:58 by user255804

Comments:

They are the same; it's just a matter of context and imagery which one gets used. For example, if you're describing a trajectory, "stationary point" kind of makes more sense, but if you're graphing a function y = f(x), then critical point makes more sense,
but the definition is the same. – Callus - Reinstate Monica (Jul 21, 2015 at 0:07)

According to some authors at least, a critical point is a point where either f′(x) = 0 or f is not differentiable, whereas a stationary point is a point where f is differentiable and f′(x) = 0. See mathworld.wolfram.com/CriticalPoint.html and mathworld.wolfram.com/StationaryPoint.html for example. – littleO (Jul 21, 2015 at 0:09)

1 Answer (score: 9)

All stationary points are critical points, but not all critical points are stationary points. A more precise definition of the two:

Critical Point: Let f be defined at c. Then we have a critical point wherever f′(c) = 0 or wherever f is not differentiable at c (or equivalently, f′(c) is not defined). Endpoints of the domain (if any) also count as critical points, provided the endpoint is included in the domain. Points where f′(c) is not defined are called singular points, and points where f′(c) = 0 are called stationary points.

Stationary Point: As mentioned above.

edited Mar 20, 2024 at 17:43 by MathematicianByMistake; answered Jul 21, 2015 at 1:04 by Bunny

Comments:

To expand on this, a critical point is a place where there is potentially a maximum or a minimum. This can happen if the derivative is zero, or if the function is not differentiable at a point (there could be a vertex, as in the absolute value function). A stationary point is just where the derivative is zero. If you think of the derivative as a velocity, then those are places where the velocity is zero, and something with zero velocity is stationary. – Alex Pavellas (Jul 21, 2015 at 3:55)

Should it be "wherever f(c) is not differentiable" instead of "wherever f′(c) is not differentiable"? – user46234 (Mar 17, 2016 at 2:29)

@KennyLJ Yes, you're absolutely right. Oversight on my part. – Bunny (Mar 17, 2016 at 15:03)
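A compact pair of examples (ours, echoing the distinction drawn in the answer and comments above):

```latex
% f(x) = x^2: f'(x) = 2x, so f'(0) = 0 and x = 0 is a stationary
% (and hence also critical) point.
\[ f(x) = x^2, \qquad f'(0) = 0 \]
% g(x) = x^{2/3}: g'(x) = \tfrac{2}{3} x^{-1/3} is undefined at x = 0,
% so x = 0 is a critical (singular) point that is not stationary.
\[ g(x) = x^{2/3}, \qquad \lim_{x \to 0^{\pm}} g'(x) = \pm\infty \]
```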
188847
https://physics.stackexchange.com/questions/247649/whats-the-required-tensile-strength-for-a-wire
homework and exercises - What's the required tensile strength for a wire? - Physics Stack Exchange

What's the required tensile strength for a wire?

Asked 9 years, 5 months ago. Modified 9 years, 5 months ago. Viewed 632 times. Score: 0.

I want to horizontally stretch a 50 meter wire rope and slide a 100 kg object from side to side. What should be the minimum tensile strength (or carrying capacity) of the rope to be able to hold the object while it is placed in the middle of the 50 meters? We can assume the rope's weight is 100 grams/meter. I am sure there's some straightforward formula to calculate this; I just need some help figuring it out.
:)

Tags: homework-and-exercises, newtonian-mechanics, string

edited Apr 6, 2016 at 6:42 by Qmechanic♦; asked Apr 5, 2016 at 19:51 by isHristov

Comments:

Draw a force diagram (this is Physics 101 stuff) - the required tensile strength depends strongly on how much sag you allow in your wire rope at the center... – Jon Custer (Apr 5, 2016 at 20:32)

See related answer physics.stackexchange.com/a/131444/392 – John Alexiou (Apr 5, 2016 at 22:31)

Another related answer physics.stackexchange.com/a/78914/392 – John Alexiou (Apr 5, 2016 at 22:33)

1 Answer (score: 1)

Actually, getting an exact expression for a hanging wire with a lumped mass somewhere in the middle is extremely hard (maybe impossible). Another thing to consider is the allowable sag of the rope: if the rope can sag a lot then the tension can be low, whereas if low sag is required the tension has to be really high. In addition, as the tension increases the rope stretches, increasing the sag but relieving the tension. It is a rather complex problem overall. I can provide an approximate expression for when the lumped weight W is located in the middle of the span S. Also important is the unit weight w = mg/ℓ (the rope's weight per unit length). I also treat the rope as inextensible.

The tension is split into the horizontal tension component H, which is constant throughout the rope, and the total tangential tension T, which increases the further away you go from the sag point. The total sag amount is D, and the relationships between H, D and T are:

T = H cosh(wS/(2H)) + (W/2) sinh(wS/(2H))
D = (H/w) (cosh(wS/(2H)) − 1) + (W/(2w)) sinh(wS/(2H))

A further approximation of the above can be made when H ≫ wS/2:

T = (8H² + w²S² + 2SWw) / (8H)
D = wS²/(8H) + SW/(4H)

Or, by solving the last equation for the horizontal tension H:

T = wD + (S/(4D)) (W − wS/2)

In your case S = 50, w = 0.1 × 9.81, W = 100 × 9.81, so for a D = 3 meter sag the tension is T = 3988 newtons, for example. Your approximate design expression is

T = 11955/D + 0.981 D

with T in newtons and D (the allowed sag) in meters.

Edit 1: Upon further examination, it can be said that the worst tension occurs when the weight is at one end. There you add the vertical tension and the weight, and combine them vectorially with the horizontal tension, for a worst-case tension of

T = √((V + W)² + H²)

The vertical tension at the end of a cable is

V = H sinh(wS/(2H))

The horizontal tension is found (numerically) from the measured sag (without the weight):

D = (H/w) (cosh(wS/(2H)) − 1)

Example: You string the rope with D = 1 meter of sag. From the above equation you find the horizontal tension to be H = 306.7 newtons. You can use Wolfram Alpha, Excel or any numeric solver you can find for this step:
1 = (H/(0.1 × 9.81)) (cosh((0.1 × 9.81 × 50)/(2H)) − 1)

The vertical tension on one end is then

V = 306.7 sinh((0.1 × 9.81 × 50)/(2 × 306.7)) = 24.55

With a weight of W = 100 × 9.81 = 981, the total tension on the end is

T = √((24.55 + 981)² + 306.7²) = 1051.3

edited Apr 6, 2016 at 15:52; answered Apr 5, 2016 at 23:47 by John Alexiou

Comments:

Whoa... that's some answer! Obviously there's no straightforward formula. :) Thanks for taking the time. I'm having trouble understanding what exactly the horizontal tension component H is. You said it is constant through the rope, but how do you measure it? – isHristov (Apr 6, 2016 at 9:20)

See the edit now for a working example. – John Alexiou (Apr 6, 2016 at 12:15)

I think I reversed the sense of W in the first equations (you know, positive force acting upwards). I will edit and change the sign to reflect the point that W points downwards. – John Alexiou (Apr 6, 2016 at 15:51)
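The numeric step above is easy to reproduce with any root finder. Below is a sketch in R (our own, not part of the answer) that solves the sag equation for H with uniroot() and then reproduces the worst-case end tension from Edit 1, using the same numbers:

```r
# Span S = 50 m, rope mass 0.1 kg/m, load 100 kg, strung with D = 1 m of sag.
S <- 50
w <- 0.1 * 9.81     # rope weight per unit length (N/m)
W <- 100 * 9.81     # weight of the object (N)
D <- 1              # measured sag without the load (m)

# Solve D = (H/w) * (cosh(w*S/(2*H)) - 1) for the horizontal tension H.
sag <- function(H) (H / w) * (cosh(w * S / (2 * H)) - 1) - D
H <- uniroot(sag, interval = c(1, 1e5))$root   # about 306.7 N

V     <- H * sinh(w * S / (2 * H))   # vertical tension at one end, about 24.6 N
T.end <- sqrt((V + W)^2 + H^2)       # worst-case total tension, about 1051 N
c(H = H, V = V, T = T.end)
```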
188848
https://tjfisher19.github.io/introStatModeling/model-validation.html
Chapter 10 Model Validation

At this point we have covered various concepts of statistical modeling, but one fundamental question remains: "Is my model any good?" Answering this question is of fundamental importance, and there is no single way to determine the appropriateness of a model. We have covered techniques such as R², R²adj and AIC, but all three of those measures essentially measure how well the fitted model fits the data you used to make the fit. Generally speaking, we should consider other measures to validate our models as well.

10.1 Underfitting vs. Overfitting Models

We have seen that increasing the complexity of a statistical model will always improve the explanatory power on a response variable. This is seen by the fact that R² will always improve as the number of predictors increases. In fact, you can show that if a regression model has k = n parameters (i.e., if the number of β-coefficients in the model is the same as the sample size of the data it is being fit to), then you guarantee R² = 100%! Of course, all this means is that such a model perfectly fits the data it was built with. At the same time, this model may be a very poor predictor of new (unobserved) individuals. So, using the same data to assess the quality of a model is not exactly a great way to assess its predictive performance. Consider the following three models fit to the same data points:

The model on the left is underfit: it misses the clear curvature in the relationship between the predictor and response. So, from a trend perspective, it would systematically mispredict future observations produced by the same process. The model on the right, however, is overfit: if you look closely, you'll see that it falls much closer to the observed data values than either of the other two models. So this overly complicated model can predict its OWN data very well – but that is because the model you see is catching a lot of the specific random behavior in this particular data set. It isn't hard to imagine that a new data set, generated by the same process but exhibiting different random behavior, would be poorly predicted by even this complicated model! The model in the middle appears to be the preferred one to generalize and make predictions from. This is because it captures the systematic trend in the predictor/response relationship, but that's all. It strikes a happy medium between two situations that can lead to poor predictive performance of a model on future observations.

10.1.1 The Bias-Variance Trade-off

So here is the dilemma:

We want to avoid overfitting because it gives too much predictive power to specific quirks in our data.

We want to avoid underfitting because we will ignore important general features in our data.

How do we balance the two? This is known as the bias-variance trade-off. Bias corresponds to underfitting (our predictions are too vague to account for general patterns that do exist in the sample) and variance corresponds to overfitting (our predictions are so specific that they only reflect our specific sample).

10.2 Validation Techniques

Overfitting results in low prediction error on the observed data but high prediction error on new (unobserved) data, while underfitting results in the opposite. Measuring these types of errors can be accomplished using a process called cross-validation (CV). CV comprises a set of techniques that enable us to measure the performance of a statistical model with regard to how well it can predict results in new datasets.
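Before turning to the specific techniques, a small simulation can make the underfit/overfit contrast concrete. The sketch below is ours (it is not part of the chapter): it fits an underfit (degree 1), a reasonable (degree 3) and a badly overfit (degree 15) polynomial to noisy data from a smooth curve, then compares training and test RMSE. The overfit model wins on its own data and loses on new data; cross-validation formalizes exactly this train/test comparison.

```r
set.seed(1)
x <- runif(60, 0, 3)
y <- sin(x) + rnorm(60, sd = 0.2)          # smooth trend plus noise
train.df <- data.frame(x = x[1:40],  y = y[1:40])
test.df  <- data.frame(x = x[41:60], y = y[41:60])

rmse <- function(fit, df) sqrt(mean((df$y - predict(fit, df))^2))
for (deg in c(1, 3, 15)) {
  fit <- lm(y ~ poly(x, deg), data = train.df)
  cat("degree", deg,
      " train RMSE:", round(rmse(fit, train.df), 3),
      " test RMSE:",  round(rmse(fit, test.df), 3), "\n")
}
```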
There are three general approaches to this, which are detailed below. They all involve splitting your data into partitions, using some part(s) of the data to build models and the remaining part(s) to test your model's predictive performance. It should go without saying that the methods we discuss here require that you have fairly large data sets (many cases, large n) so that we have ample information with which to build models.

10.3 Basic Validation with a single holdout sample

The most basic idea behind CV involves dividing your data into two partitions, or subsets:

A training set on which we build our model. It is called a training set because we use this data partition to train the model for prediction.

A test set (or validation set) which is used to test our model by estimating the prediction error resulting from predicting new observations.

Commonly used split proportions in practice are 80% for training data and 20% for test data, though this can be altered. To assess model predictive performance, a good choice is to look at the model's residual standard error as calculated on the test data. Formulaically, the residual standard error is the square root of the mean of the squared prediction errors.

Example: Estimating Bodyfat Percentage. Let's revisit the bodyfat percentage problem from Chapter 9 in the textbook. Recall that the goal was to develop a model that would do well at predicting a man's bodyfat percentage by simply taking some selected circumference measurements around his body. Let's use this data to do some model validation. The first thing we need to do is split the data set consisting of n = 252 men into a training set and a test set. This is done using the code below, creating a random 80/20 training/test split using R's sample function:

```
set.seed(54321)  # Set a seed value for reproducibility purposes in this document

# Randomly select 80% of the data (rows)
index <- sample(1:nrow(bodyfat), size=floor(.80*nrow(bodyfat)))
train <- bodyfat %>% filter(row_number() %in% index)
test  <- bodyfat %>% filter(!row_number() %in% index)
```

The vector index consists of randomly selected row numbers (cases) from the bodyfat dataset. The size of the selection is set to be 80% of the number of rows in the dataset itself. Below we check to see how many cases landed in each of the training and test data sets:

```
nrow(bodyfat)
```

```
252
```

```
nrow(train)
```

```
201
```

```
nrow(test)
```

```
51
```

We have split the original sample of n = 252 men into a training set of 201 men and a test (validation) set of 51 men.

10.3.1 Use the training data to fit and select models

We now use the training data (named train by us above) to build our model. We can use any or all of the techniques we have already covered to this point to build ("train") our model: stepwise regression, variable deletion, transformations, etc. We use a best-subsets approach below, much like we did back in Chapter 9. It is critical to remember that everything we do in the model fitting stage is done on the training data only.
10.3.2 Model training: Check various models' performance based on R²adj and BIC:

```
bodyfat.gsub <- regsubsets(bodyfat.pct ~ ., data=train, nbest=4, nvmax=13)
stats <- summary(bodyfat.gsub)
gsub.df <- data.frame(Model.Number=1:length(stats$adjr2),
                      Adjusted.R2=stats$adjr2,
                      BIC=stats$bic)
p1 <- ggplot(gsub.df, aes(x=Model.Number, y=Adjusted.R2)) +
  geom_line() + geom_point(color="red", size=2) + theme_minimal() +
  ylab("Adjusted R-squared") + xlab("Model Number")
p2 <- ggplot(gsub.df, aes(x=Model.Number, y=BIC)) +
  geom_line() + geom_point(color="red", size=2) + theme_minimal() +
  ylab("BIC") + xlab("Model Number")
grid.arrange(p1, p2, nrow=2)
```

The estimated β-coefficients for the predictors of the best fitting model based on maximizing R²adj are as follows:

```
coef(bodyfat.gsub, which.max(gsub.df$Adjusted.R2))
```

```
(Intercept)      weight        neck     abdomen       ankle      biceps
 -36.886823   -0.151137   -0.370366    1.005815    0.360620    0.371678
    forearm       wrist
   0.464346   -1.609547
```

We see that this criterion selects a 7-predictor model. While this is OK, it would involve a lot of body measuring in practice, and so might not be the best choice from an implementation perspective. Its R²adj is found as follows:

```
max(gsub.df$Adjusted.R2)
```

```
0.731449
```

Now, let's look at what model is selected as optimal when minimizing BIC:

```
coef(bodyfat.gsub, which.min(gsub.df$BIC))
```

```
(Intercept)      weight     abdomen     forearm       wrist
 -32.562236   -0.122294    0.969583    0.539201   -1.721664
```

The best fitting model based on BIC has four predictors: weight, abdomen, forearm, and wrist. As a corollary assessment, we can check the value of R²adj for this 4-predictor model:

```
gsub.df$Adjusted.R2[which.min(gsub.df$BIC)]
```

```
0.725006
```

The best fitting model based on BIC is much simpler (4 predictors instead of 7), and its R²adj is not noticeably lower than that of the larger model (0.7250 vs. 0.7314). So, let's choose to use the four-predictor model.

10.3.3 Model validation step:

Now, let's use this model to predict bodyfat percentages for the men in the holdout (test) dataset. First we fit the chosen model on the training dataset. Then we use that model to predict the holdout values in the testing set.

```
fit1 <- lm(bodyfat.pct ~ weight + abdomen + biceps + wrist, data=train)
test.predictions <- predict(fit1, newdata=test)
```

The residual standard error (or equivalently, the square root of the mean squared residuals, i.e. the root mean squared error) can be calculated on the test data to see how well our model predicts future observations. Below we manually calculate this value for explanation.

```
# Calculate observed - predicted bodyfat for test data
residuals <- test$bodyfat.pct - test.predictions

# Calculate and display the residual std error
test.rse <- sqrt(mean(residuals^2))
test.rse
```

```
4.48002
```

This is an estimate of the average residual (prediction error) size for individuals in an independent sample of men. Using an Empirical Rule-style argument, we can be about 95% confident that our model will produce a predicted male bodyfat percentage that is within about 2 × 4.48 ≈ 8.96% of the actual value.
Compare this result to the artificially optimistic residual standard error we get by naively predicting the results for the same men we used to fit the model, using all the original data:

```
# Fit model to all the data
full.sample.fit <- lm(bodyfat.pct ~ weight + abdomen + biceps + wrist, data=bodyfat)

# Predict all men in the same sample, and calculate their residuals and residual std error
full.predictions <- predict(full.sample.fit, newdata=bodyfat)
residuals <- bodyfat$bodyfat.pct - full.predictions
full.rse <- sqrt(mean(residuals^2))
full.rse
```

```
4.31743
```

The result might look better, but it is biased toward the sample it came from!

10.4 Hold-out sample validation using caret

The above example was quite involved, and done so for the sake of explanation. We can instead use some features in the tidyverse and the add-on caret library to effectively do the same thing.

```
# Create a balanced 80/20 split of the sample based on the response variable
train.index <- bodyfat$bodyfat.pct %>%
  createDataPartition(p = 0.8, list = FALSE)
train.data <- bodyfat %>% filter(row_number() %in% train.index)   # 80% goes into training data
test.data  <- bodyfat %>% filter(!row_number() %in% train.index)  # The rest goes into test data

# Verify balance between training and test data on the response variable
ggplot() +
  geom_density(data=train.data, aes(x = bodyfat.pct), fill = "#00BCD8", alpha = 0.3) +
  geom_density(data=test.data,  aes(x = bodyfat.pct), fill = "#F8766D", alpha = 0.3)
```

```
# Build/fit our model
fitted.model <- lm(bodyfat.pct ~ weight + abdomen + biceps + wrist, data=train.data)

# Make predictions and compute RMSE and MAE
predictions <- fitted.model %>% predict(test.data)
data.frame(RMSE = RMSE(predictions, test.data$bodyfat.pct),
           MAE  = MAE(predictions, test.data$bodyfat.pct))
```

```
     RMSE     MAE
1 4.32594 3.58625
```

The caret package can easily provide us with the root mean squared error RMSE (residual standard error) like before, but it can also provide other measures of model performance. Two measures are typically employed:

Root Mean Squared Error (RMSE), which measures the average prediction error made by the model when predicting the outcome for a future observation. This is what we have already calculated. Lower values of RMSE are better.

Mean Absolute Error (MAE). This is an alternative to RMSE that is less sensitive to outliers in your data. It corresponds to the average absolute difference between observed and predicted outcomes. Lower values of MAE are also better.

Both of these are measured in the same units as the response variable. In the above example, the mean absolute error in bodyfat prediction is about 3.59% and the root mean squared error is about 4.33% (note these differ from our previous derivation because a different training and testing split was declared).

Disadvantages. The single holdout sample method is only useful when you have a large data set that can be partitioned. The key disadvantage, however, is that we build a model using only a fraction of the available data, which may leave out some interesting information, leading to higher bias. We try to mitigate this by ensuring the training and test data sets are balanced with respect to the response variable (using createDataPartition from the caret package), but the potential for bias still exists, as it could come from imbalance among the predictors, etc. As a result, test error rates using a single holdout sample can be highly unstable depending on which observations are included in the training set.
So, we'd like to consider using more comprehensive approaches.

10.5 "Leave one out" Cross-Validation (LOOCV)

More comprehensive methods involve doing the training/testing in multiple stages across many partitions of the data. One such method, which admittedly takes the process to an extreme, is called "leave-one-out" cross-validation. The method is as follows:

1. Leave out one data point (observation) and build the model on the remaining n − 1 data points.
2. Use the model from step 1 to predict the single data point that was left out. Record the test error associated with this prediction.
3. Repeat the process above for all n data points. This means you will fit n models!
4. Calculate the overall prediction error by averaging all the test errors recorded for the individual points through the process. We can still use RMSE or MAE for this.

This is a very intensive process, as you might imagine, but it is actually very easy to do in caret:

```
# Set training control method as LOOCV
train.control <- trainControl(method = "LOOCV")

# Train the model
LOOCV.model <- train(bodyfat.pct ~ weight + abdomen + biceps + wrist,
                     data = bodyfat, method = "lm",
                     trControl = train.control)

# Display results
LOOCV.model
```

```
Linear Regression

252 samples
  4 predictor

No pre-processing
Resampling: Leave-One-Out Cross-Validation
Summary of sample sizes: 251, 251, 251, 251, 251, 251, ...
Resampling results:

    RMSE  Rsquared      MAE
 4.43271  0.718507  3.65969

Tuning parameter 'intercept' was held constant at a value of TRUE
```

The function trainControl() defines the partitioning method for the coming validation step. Then we use the train() function in caret to execute the process. We first specify the model form (we are still choosing to use the 4-predictor model here), run the process on the master data set (here, bodyfat), use linear regression (method="lm"), and specify the partitioning method to use. The resulting output lists the RMSE and MAE, along with a pseudo R-squared type measure (comparing prediction error to the variance in the response).

LOOCV Disadvantages. On the surface, LOOCV seems like it might be the preferred approach, since we use almost all the data (n − 1 data points) to independently predict every possible holdout. However, it can be computationally intensive, and it might result in higher variation in the prediction error if some data points are outliers. So, ideally we will use a good ratio of testing data points … a solution provided by the next method, which is a sort of compromise between a single holdout sample and LOOCV.

10.6 k-fold Cross-Validation

The k-fold cross-validation method evaluates model performance on different subsets of the training data and then calculates the average prediction error rate across all the different subsets. We purposefully choose a number of subsets, known as folds of the data, into which we partition the data. The process is as follows:

1. Randomly split the data into k subsets, or folds. Note that k is determined by the user; typically a value of 5 or 10 is used.
2. Hold out one fold, and train the model on all other folds combined.
3. Test the model on the held-out fold and record the prediction errors.
4. Repeat this process until each of the k folds has served as a test set.
5. Calculate the average of the k recorded errors. This will be the performance measure for the model.

Visually, the folds can be thought of as something like the below example, a k = 10 fold segmentation. The most obvious advantage of k-fold CV compared to LOOCV is computational.
The question is: what is a good value for k? Consider the following:

A low value of k (few folds) leads to more bias potential. It is not hard to see that using a small value of k is not that much different from just using the first method described, a single holdout sample.

A high value of k (many folds) leads to more variance in the prediction error. In fact, if k = n, then you are doing LOOCV.

It has been shown in practice that using k = 5 or 10 yields test error rates that suffer neither from excessive bias nor from high prediction error variance. The following example uses caret to perform a 5-fold cross-validation to estimate the prediction error for predicting bodyfat percentage in men from the weight, abdomen, biceps, and wrist measurement predictors.

```
# Set training control method as 5-fold CV
train.control <- trainControl(method = "cv", number = 5)

# Train the model
kfoldCV.model <- train(bodyfat.pct ~ weight + abdomen + biceps + wrist,
                       data = bodyfat, method = "lm",
                       trControl = train.control)

# Display results
kfoldCV.model
```

```
Linear Regression

252 samples
  4 predictor

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 202, 202, 202, 202, 200
Resampling results:

    RMSE  Rsquared      MAE
 4.41851  0.731733  3.64661

Tuning parameter 'intercept' was held constant at a value of TRUE
```

Here we see similar output as with LOOCV, but the procedure runs substantially quicker. k-fold cross-validation is generally recommended over the other two methods in practice due to its balance between variability, bias and computational run-time.

10.7 A final note

As with many topics in this text, we are merely scratching the surface here. However, we have outlined the basic building blocks of modern model validation. Several variants exist, including:

Segmenting your data into training, tuning and testing sets. Consider the brief discussion on selecting a tuning parameter for LASSO and Ridge regression in the previous chapter. The tuning set can be used to select that parameter, and the final model can then be validated with the testing set.

Repeating k-fold CV multiple times. Performing k-fold CV one time is not too different from the single training and testing set approach: the data is segmented randomly into sets, and a different random permutation will result in a different RMSE (we saw this above!). It is possible to repeat the k-fold CV multiple times and aggregate all the results. If computational power allows, this is typically done in practice; a sketch is given below.
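For completeness, here is a minimal sketch (ours, not from the chapter) of that last variant using caret, which accepts method = "repeatedcv" in trainControl(). It repeats the 5-fold procedure 10 times on the same 4-predictor bodyfat model and averages the resampled RMSE/MAE over all 50 held-out folds:

```r
# Repeated 5-fold CV: 10 different random fold assignments
set.seed(99)
train.control <- trainControl(method = "repeatedcv", number = 5, repeats = 10)

repeatedCV.model <- train(bodyfat.pct ~ weight + abdomen + biceps + wrist,
                          data = bodyfat, method = "lm",
                          trControl = train.control)
repeatedCV.model   # RMSE and MAE are now averaged across all 50 resamples
```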
188849
https://www.pearson.com/channels/microeconomics/learn/brian/ch-12-monopoly/efficiency-and-deadweight-loss
Monopoly Efficiency and Deadweight Loss

Topic summary: In a monopoly, the quantity produced is always less than the efficient quantity, leading to deadweight loss. Consumer surplus decreases due to higher prices, while producer surplus may increase, though not without losses in potential trades. Monopolies lack both productive efficiency, as they do not produce at minimum average total cost, and allocative efficiency, since marginal benefits exceed marginal costs. Understanding these concepts is crucial for grasping market structures and their implications for economic surplus and efficiency. Monopolies do not produce the efficient quantity.

Video summary: The concepts of consumer and producer surplus are crucial in understanding market structures, particularly in monopolies compared to perfect competition. In a perfectly competitive market, firms produce at the efficient quantity where marginal cost equals marginal revenue, leading to maximum total surplus without any deadweight loss. Here, consumer surplus is represented by the area above the price and below the demand curve, while producer surplus is the area below the price and above the supply curve. The total surplus in this scenario is the sum of consumer and producer surplus, encompassing all trades made at the efficient quantity.

In contrast, a monopoly restricts output to maximize profits, resulting in a lower quantity produced than the efficient level. This leads to a higher price for consumers, which reduces consumer surplus: consumers lose the surplus represented by the areas that are no longer accessible due to the higher price and lower quantity. Producer surplus, however, may increase, since the monopoly can charge a higher price for the limited quantity sold, but this comes at the cost of lost trades that would have occurred at a lower price.

The deadweight loss in a monopoly arises from the trades that do not occur due to the restricted quantity. This loss is represented by the areas of surplus that are no longer realized, specifically the areas that would have contributed to total surplus in a competitive market. Thus, the total surplus in a monopoly is reduced, highlighting the inefficiencies inherent in monopolistic structures.

When discussing efficiency, it is essential to differentiate between productive and allocative efficiency. Productive efficiency occurs when goods are produced at the lowest possible cost, typically at the minimum point of the average total cost curve. In a monopoly, firms do not achieve this efficiency, as they operate on the downward-sloping part of the average total cost curve and fail to produce at the minimum cost. Allocative efficiency, on the other hand, is achieved when production aligns with consumer preferences, where the price equals the marginal cost.
Monopolies do not reach allocative efficiency either: they produce less than the efficient quantity, so the marginal benefit to consumers exceeds the marginal cost of production. In summary, a monopoly achieves neither productive nor allocative efficiency, resulting in lower total surplus and the presence of deadweight loss. Understanding these concepts is vital for analyzing the effects of monopolistic practices on consumer welfare and market efficiency.

Practice Problems

Problem 2. An unregulated monopoly will sell:
A) 30 tickets
B) 50 tickets
C) 60 tickets
D) 100 tickets

Problem 3. If the monopolist's fixed cost is $25, the monopoly's total cost when maximizing profit is:
A) $35
B) $45
C) $85
D) $145

Problem 4. If the monopolist's fixed cost is $25, the monopoly's total economic profit when maximizing profit is:
A) $0
B) $20
C) $45
D) The monopoly is incurring a loss

Problem 5. The deadweight loss created by the monopoly is:
A) $0
B) $22.5
C) $45
D) $90

Here's what students ask on this topic:

Deadweight loss in a monopoly is the loss of economic efficiency that occurs when the efficient quantity of a good is not produced. The firm restricts output to maximize profit, producing less than the efficient quantity, which results in a higher price and reduced consumer surplus. The trades that do not occur because of the restricted output are the deadweight loss. Graphically, deadweight loss is the area between the demand and supply curves from the monopoly quantity to the efficient quantity.

In a monopoly, consumer surplus decreases because the monopolist sets a higher price and produces a lower quantity than a perfectly competitive market would. Consumer surplus is the area above the price and below the demand curve, and this area shrinks under monopoly. Producer surplus, the area below the price and above the supply curve, may increase because the monopolist charges a higher price. Total economic surplus (consumer surplus plus producer surplus) nevertheless decreases, because of the deadweight loss from the trades that no longer occur.

Monopolies lack productive efficiency because they do not produce at minimum average total cost (ATC). Productive efficiency requires producing at the lowest possible cost, at the minimum point of the ATC curve. A monopoly produces a quantity at which ATC is not minimized, typically on the downward-sloping part of the ATC curve, so its production costs are higher than in a perfectly competitive market, where firms produce at minimum ATC.

Allocative efficiency occurs when production aligns with consumer preferences, that is, when the marginal benefit to consumers equals the marginal cost of production. A monopoly does not achieve allocative efficiency because it produces less than the efficient quantity.
The marginal benefit to consumers exceeds the marginal cost, indicating that more of the good should be produced to meet consumer demand. This misalignment produces a deadweight loss and a loss of economic welfare.

A monopoly creates deadweight loss by producing less than the efficient quantity of a good. The monopolist restricts output to raise the price and maximize profit, so quantity is lower than in a perfectly competitive market. Some mutually beneficial trades therefore never occur; on a graph, this deadweight loss is the area between the demand and supply curves from the monopoly quantity to the efficient quantity, representing the lost total economic surplus.
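The surplus accounting above can be made concrete with a little arithmetic. The sketch below assumes a hypothetical linear inverse demand P = a − bQ and a constant marginal cost c; none of the numbers come from the lesson's graph, they simply illustrate the competition-versus-monopoly comparison:

```python
# Surplus comparison for a linear market: demand P = a - b*Q, constant MC = c.
# All numbers are illustrative assumptions, not taken from the lesson's graph.

a, b, c = 100.0, 1.0, 20.0   # demand intercept, demand slope, marginal cost

# Perfect competition: price = MC, so a - b*Q = c.
q_pc = (a - c) / b
p_pc = c

# Monopoly: marginal revenue a - 2b*Q equals MC.
q_m = (a - c) / (2 * b)
p_m = a - b * q_m

# Surplus areas (triangles and rectangles under linear curves).
cs_pc = 0.5 * (a - p_pc) * q_pc          # consumer surplus, competition
cs_m  = 0.5 * (a - p_m) * q_m            # consumer surplus, monopoly
ps_m  = (p_m - c) * q_m                  # producer surplus, monopoly (flat MC)
dwl   = 0.5 * (p_m - c) * (q_pc - q_m)   # deadweight loss triangle

print(f"Competition: Q={q_pc:.0f}, P={p_pc:.0f}, CS={cs_pc:.0f}")
print(f"Monopoly:    Q={q_m:.0f}, P={p_m:.0f}, CS={cs_m:.0f}, PS={ps_m:.0f}")
print(f"Deadweight loss: {dwl:.0f}")
```

With these numbers, competition yields Q = 80 and total surplus 3,200, while the monopoly sells Q = 40 at the higher price of 60: consumer surplus shrinks from 3,200 to 800, producer surplus rises to 1,600, and the missing 800 is exactly the deadweight loss triangle.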
188850
https://www.sciencedirect.com/science/article/abs/pii/S0263931924000371
Surgery (Oxford), Volume 42, Issue 6, June 2024, Pages 355–363
Basic Science

Clinically applied anatomy of the vertebral column

Abstract

The vertebral column (spinal column, spine, or backbone) forms the central axis of the body's skeleton. It supports the skull superiorly and participates in the formation of the pelvis inferiorly. The vertebral column comprises the following five regions in cephalocaudal sequence: cervical, thoracic, lumbar, sacral and coccygeal. The vertebral column contains the spinal cord within the vertebral canal, protecting the spinal cord from external trauma. Optimal medical and surgical management of spinal disease is crucially dependent on accurate clinical and radiological diagnosis, which in turn is reliant on a sound understanding of the structural and functional anatomy of the vertebral column. In this article a general description of the articulated vertebral column is followed by a description of the morphology of representative vertebrae from the vertebral regions.

Introduction

The vertebral column extends from the base of the skull superiorly (supporting the skull) to the midline region of the buttocks inferiorly, where it articulates with the right and left hip (innominate) bones of the pelvic girdle. It makes up about two-fifths of the total height of the body in the adult and is a very strong yet flexible midline strut that contains and protects the spinal cord within its vertebral canal. The vertebral column is formed by a vertical chain of bony vertebrae, joined in life by various joints, ligaments and muscles. It forms the skeleton of the neck as well as the midline, vertically oriented portion of the skeleton of the trunk (the rest of the skeleton of the trunk is formed by the ribs and sternum).

For descriptive purposes the vertebral column is divided into five regions, each containing vertebrae with distinct morphological, and thus functional, characteristics. These regions are the cervical in the neck, and the thoracic, lumbar, sacral and coccygeal in the trunk. A total of 33 vertebrae – 7 cervical, 12 thoracic, 5 lumbar, 5 sacral and 4 coccygeal – form the vertebral column. Individual vertebrae in the cervical, thoracic and lumbar regions can move relative to their neighbouring vertebrae. However, the sacral and coccygeal vertebrae are fused and thus immovable (Figure 1).

Running along the length of the entire vertebral column on either side of the midline, from sacrum to skull, is a bulky and layered arrangement of powerful muscles. These muscles are principally extensors of the vertebral column (and extensors of the skull on the vertebral column) and are innervated segmentally by the posterior rami of spinal nerves. They are important postural muscles. The rest of this article will focus on the bones and joints of the vertebral column and will not cover the anatomy of the related vertebral muscles.

The vertebral column in a fetus is curved in the sagittal (midline) plane in such a fashion that it is concave ventrally (anteriorly) and convex dorsally (posteriorly). This curvature, marked by an anterior concavity, is termed the primary curvature of the vertebral column. The thoracic, sacral and coccygeal regions of the vertebral column have a primary curvature that is retained throughout life.
However, after birth, the cervical and lumbar regions of the vertebral column gradually develop an anterior convexity referred to as a secondary curvature. The secondary curvature of the cervical region develops as a result of sustained extension of the head and neck by the postvertebral muscles when the child first holds up its head. That in the lumbar region appears much later and is associated with the muscular support of the trunk provided by the powerful postvertebral muscles when the infant learns to sit, stand and walk. In the adult, therefore, the vertebral column viewed in profile possesses both primary and secondary curvatures.

Learning points: The primary curvature of the vertebral column is shaped like the letter C (concave anteriorly) and is characteristic of the fetus, and of the thoracic, sacral and coccygeal regions of the adult. Secondary curvatures of the vertebral column are found in the adult cervical and lumbar regions. Historically, the primary and secondary curvatures were also referred to as a kyphosis and a lordosis, respectively. Used correctly and in the clinical context, these terms should be confined to the description of abnormally exaggerated primary (kyphosis) and secondary (lordosis) curvatures.

In addition to these curvatures seen in the sagittal plane, the normal vertebral column may also exhibit a small degree of curvature in the coronal plane. Where such a lateral curvature is at least 10 degrees, it is referred to as a scoliosis. Note, however, that a scoliosis is a three-dimensional abnormality that occurs as a result of a combination of lateral curvature and rotation of the vertebral column.

Learning point: Exaggerated primary, secondary, and lateral/rotational curvatures of the vertebral column are referred to, respectively, as a kyphosis, lordosis, and scoliosis.

With the exception of the first two cervical vertebrae (the atlas and axis, respectively), all the moveable vertebrae, whether from the cervical, thoracic or lumbar regions, share a more-or-less common morphological design (Figure 2). Typical vertebrae in these regions consist of a vertebral body located anterior to a vertebral arch that together surround a vertebral foramen. In life, the vertebral foramina 'stacked' one on top of another form the vertebral (spinal) canal, which contains and protects the spinal cord and its meningeal coverings. Note that the spinal cord ends at the level of the first lumbar vertebra in the adult, while the meningeal coverings continue into the sacrum. A transverse process projects laterally from the vertebral arch on either side, while the spinous process projects backwards in the posterior midline. The lamina is the part of the vertebral arch that lies between the bases of the transverse processes on either side and the spinous process. The vertebral arch is then attached to the vertebral body on either side by a pedicle. A superior articular process projects superiorly from the vertebral arch on either side of the midline, at approximately the junction of the lamina, pedicle and transverse process. Below this, projecting inferiorly from the vertebral arch in line with the superior articular process, is the inferior articular process.

Section snippets

Cervical vertebrae

There are seven cervical vertebrae, distinguished from other vertebrae by the presence of a perforation in each transverse process – the foramen transversarium (Figure 3).
These transverse foramina (except that of the C7 vertebra) transmit the ipsilateral vertebral artery, vertebral veins and accompanying sympathetic fibres. The vertebral artery enters its 'vertebral' course at the foramen transversarium of the C6 vertebra and then ascends successively through the foramina transversaria of the cervical

Spinal motion segment

The functional unit of the vertebral column comprises any two successive, moveable vertebrae, including the structures and articulations that unite them, and is referred to as the spinal motion segment (Figure 7). Thus, for example, the C7 and T1 vertebrae together constitute a motion segment, as do T9 and T10, and T12 and L1, and so on. Three factors serve to confer physical stability, structural integrity and flexibility to any spinal motion segment, and thereby to the entire length of

Intervertebral joints

Three different types of joints link adjacent vertebrae to form the complete articulated vertebral column in life. These are the secondary cartilaginous intervertebral discs between vertebral bodies, the synovial facet (zygapophyseal) joints between the superior and inferior articular processes, and fibrous joints between other portions of the vertebral arch.

Blood supply of the vertebrae

The vertebral column receives arterial blood segmentally via the vertebral, ascending cervical, posterior intercostal, subcostal, lumbar, iliolumbar, lateral sacral and medial sacral arteries. Each of these arteries contributes periosteal, spinal and equatorial branches to individual vertebrae. Venous blood from the vertebrae drains via spinal veins, which join to form the internal and external vertebral venous plexuses. The basivertebral veins drain each vertebral body into the internal

Embryological development of the vertebrae

Each vertebra ossifies from three primary centres, one for each side of the vertebral arch and one for the body. The body occasionally develops from two centres, and suppression of one of these may result in the formation of a hemivertebra with a consequent congenital scoliosis. Failure of the two arch centres to fuse posteriorly results in the condition of spina bifida, which occurs most commonly in the lumbar region. Usually this is not associated with any neurological abnormality (spina bifida

© 2024 Published by Elsevier Ltd.
188851
https://www.thoughtco.com/how-barometers-measure-air-pressure-3444416
How a Barometer Works and Helps Forecast Weather

By Jenny Worboys. Updated on May 07, 2025

Key Takeaways
- A barometer measures air pressure, helping weather experts predict changes in weather and storms.
- The mercury barometer works by balancing mercury's weight against the air pressure above it.
- Aneroid barometers use a metal box that changes shape with air pressure to show readings.

A barometer is a widely used weather instrument that measures atmospheric pressure (also known as air pressure or barometric pressure): the weight of the air in the atmosphere. It is one of the basic sensors included in weather stations. While many types of barometer exist, two main types are used in meteorology: the mercury barometer and the aneroid barometer.

How the Classic Mercury Barometer Works

The classic mercury barometer is a glass tube about 3 feet high, open at one end and sealed at the other. The tube is filled with mercury and stood upside down in a container of mercury called the reservoir. The mercury level in the glass tube falls, creating a vacuum at the top. (The first barometer of this type was devised by the Italian physicist and mathematician Evangelista Torricelli in 1643.)

The barometer works by balancing the weight of mercury in the glass tube against the atmospheric pressure, much like a set of scales. Atmospheric pressure is essentially the weight of the air above the reservoir, so the mercury level keeps changing until the weight of mercury in the glass tube exactly equals the weight of air above the reservoir. Once the two are balanced, the pressure is recorded by "reading" the height of the mercury in the vertical column.

If the weight of mercury is less than the atmospheric pressure, the mercury level in the glass tube rises (high pressure). In areas of high pressure, air sinks toward the surface of the earth faster than it can flow out to surrounding areas. Since the number of air molecules above the surface increases, there are more molecules to exert a force on that surface. With an increased weight of air above the reservoir, the mercury level rises.

If the weight of mercury is more than the atmospheric pressure, the mercury level falls (low pressure). In areas of low pressure, air rises away from the surface of the earth faster than it can be replaced by air flowing in from surrounding areas. Since the number of air molecules above the area decreases, there are fewer molecules to exert a force on that surface. With a reduced weight of air above the reservoir, the mercury level drops.

Mercury vs. Aneroid

We've already seen how mercury barometers work. One drawback is safety: mercury is a highly poisonous liquid metal. Aneroid barometers are therefore more widely used as an alternative to "liquid" barometers. Invented in 1844 by the French scientist Lucien Vidi, the aneroid barometer resembles a compass or clock. Here's how it works: inside an aneroid barometer is a small, flexible metal box. Because this box has had the air pumped out of it, small changes in external air pressure cause its metal to expand and contract.
The expansion and contraction movements drive mechanical levers inside, which move a needle around the barometer's face dial, so the pressure change is easily displayed. Aneroid barometers are the kind most commonly used in homes and small aircraft.

Cell Phone Barometers

Whether or not you have a barometer in your home, office, boat, or plane, chances are your iPhone, Android, or other smartphone has a built-in digital barometer. Digital barometers work like an aneroid, except the mechanical parts are replaced with a simple pressure-sensing transducer. Why is this weather-related sensor in your phone? Many manufacturers include it to improve the elevation measurements provided by your phone's GPS services, since atmospheric pressure is directly related to elevation. If you happen to be a weather geek, you get the added benefit of being able to share and crowdsource air pressure data with other smartphone users via weather apps and your phone's always-on internet connection.

Millibars, Inches of Mercury, and Pascals

Barometric pressure can be reported in any of the following units:
- Inches of mercury (inHg): used mainly in the United States.
- Millibars (mb): used by meteorologists.
- Pascals (Pa): the SI unit of pressure, used worldwide.
- Atmospheres (atm): air pressure at sea level at a temperature of 59 °F (15 °C).

When converting between them, use this relation: 29.92 inHg = 1.0 atm = 101,325 Pa = 1,013.25 mb.

Edited by Tiffany Means
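To make the conversions concrete, here is a brief sketch. The constants are standard reference values, and the altitude estimate uses the common international barometric approximation; both are my additions rather than anything stated in the article:

```python
# Barometric unit conversions plus two classic formulas, as a sketch.
# Constants are standard reference values (assumed, not from the article).

PA_PER_ATM  = 101_325.0   # pascals per standard atmosphere
PA_PER_INHG = 3_386.389   # pascals per (conventional) inch of mercury
PA_PER_MB   = 100.0       # 1 mb = 1 hPa = 100 Pa
RHO_HG      = 13_595.1    # density of mercury at 0 deg C, kg/m^3
G           = 9.80665     # standard gravity, m/s^2

def convert(pressure_pa: float) -> dict:
    """Express a pressure given in pascals in the other common units."""
    return {
        "Pa":   pressure_pa,
        "mb":   pressure_pa / PA_PER_MB,
        "inHg": pressure_pa / PA_PER_INHG,
        "atm":  pressure_pa / PA_PER_ATM,
    }

def mercury_column_m(pressure_pa: float) -> float:
    """Height of the mercury column balanced by this pressure: h = P / (rho * g)."""
    return pressure_pa / (RHO_HG * G)

def altitude_m(pressure_pa: float, sea_level_pa: float = PA_PER_ATM) -> float:
    """Rough altitude from pressure, via the international barometric formula."""
    return 44_330.0 * (1.0 - (pressure_pa / sea_level_pa) ** 0.1903)

p = PA_PER_ATM
print(convert(p))            # ~1013.25 mb, ~29.92 inHg, 1.0 atm
print(mercury_column_m(p))   # ~0.76 m -- the classic ~30-inch mercury column
print(altitude_m(90_000.0))  # ~990 m: how a phone refines its GPS elevation
```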
188852
https://www.youtube.com/watch?v=_-LvA1xlQLs
How To Find The X and Y Intercepts of a Line
The Organic Chemistry Tutor
9,860,000 subscribers · 11,546 likes · 963,207 views · Posted: 23 Jul 2017

Description: This math video tutorial explains how to find the x and y intercepts of a line from a graph and from an equation in slope-intercept form and in standard form. It discusses the process of graphing linear equations using x and y intercepts, and contains plenty of examples and practice problems. It's useful for students taking algebra.

Linear Equations - Free Formula Sheet:
Linear Equations:
Final Exam and Test Prep Videos:

360 comments

Transcript: In this video we're going to talk about how to find the x and y intercepts from a graph and from an equation, and also how to graph a linear equation in standard form by using the x and y intercepts. So let's go ahead and begin.

Let's start with a graph. Based on this graph, what are the x and y intercepts? The x-intercept is basically the x-value where the graph touches the x-axis, and it touches it at an x-value of four, so that is the x-intercept. You can write it as x = 4, or you can write it as an ordered pair, (4, 0). Now what is the y-intercept? In this particular graph, all you need to do is locate the y-value at which the graph touches the y-axis, and that y-value is negative three. So I'm going to write it as the ordered pair (0, −3), or you could say y = −3. That's how you can identify the x and y intercepts from a graph.

Now let's look at another example. I'm going to draw another graph and determine the x and y intercepts from it. Let's start with the x-intercept. What is the x-intercept in this problem? Notice that we have an x-value of 3; that's where the curve intercepts the x-axis, so we have the point (3, 0). That's the x-intercept. The y-intercept occurs at a y-value of two; that's where the curve touches the y-axis, so that point is (0, 2): x is zero but y is 2 at that point. So now you know how to find the x and y intercepts from a graph; that's all you need to do.

Let's say you're given a linear equation, y = 2x + 1.
Find the x and y intercepts and then graph the function. This linear equation is in slope-intercept form, that is, y = mx + b form. m is the slope, the number in front of x, so the slope in this problem is two. The y-intercept is b, so we don't have to do anything to solve for the y-intercept: we have the point (0, 1). What we need to do is find the x-intercept. To find the x-intercept, replace y with zero and solve for x. (To find the y-intercept, you can plug zero in for x: if you replace x with zero, you get y = 1, which is always going to be whatever that constant is when the equation is in slope-intercept form.) Now let's calculate the x-intercept. Subtract one from both sides: −1 = 2x. Then divide by two, so the x-intercept is −1/2; as a point, it's (−1/2, 0). Now that we have the x- and y-intercepts, we can make a graph. The y-intercept is at (0, 1) and the x-intercept is at (−1/2, 0), which is about there, approximately. Now we just have to draw the line that connects the two points, and that's how you can graph a linear equation given the x and y intercepts.

Let's try another example: graph the equation y = (1/2)x − 3 by finding the x and y intercepts. We can see the y-intercept automatically: it's negative three, so we have the point (0, −3). Let's find the x-intercept: replace y with zero and solve for x. Add three to both sides (negative three plus three is zero), so 3 = (1/2)x. To get rid of the fraction, multiply both sides by two: one-half times two is one, so on the right side all we have left is x, and two times three is six. Therefore the x-intercept is (6, 0). Now we can make the graph: plot the x-intercept at an x-value of six and a y-value of zero, plot the y-intercept at (0, −3), and connect the two points with a straight line. That's how you do it.

Now, sometimes you might be given a linear equation in standard form, that is, in ax + by = c form. Go ahead and find the x and y intercepts. To find the x-intercept, what you need to do is replace y with zero. Three times zero is zero, and adding zero changes nothing, so that portion simply disappears; therefore 2x = 6. Next divide both sides by two: six divided by two is three, so the x-intercept is x = 3 and we have the ordered pair (3, 0). Now let's find the y-intercept, so this time we replace x with zero. Two times zero is nothing, so what we have left is −3y = 6. Divide both sides by negative three: six divided by negative three is negative two, so the y-intercept is (0, −2). Now we can make the graph: start with the x-intercept, at an x-value of 3 on the x-axis, then the y-intercept at −2 on the y-axis, and that's all you need to do to graph that particular equation.

Let's work on one more example for the sake of practice. Go ahead and graph this equation. Find the intercepts first: replace y with zero, so we have 3x = 12, and 12 divided by 3 is 4.
So the x-intercept is (4, 0). Next, replace x with zero: three times zero is zero, zero plus 4y is still 4y, and twelve divided by four is three, so the y-intercept is (0, 3). Now let's graph it. The first point is at an x-value of four, the second point is at a y-value of three, and then simply connect the two points, and that's it. So now you know how to graph linear equations in standard form and in slope-intercept form. There are many different methods you can use, but as you can see, one method is simply finding the x and y intercepts. There are other methods out there, so you can check out the rest of the playlist and find other videos I have on YouTube on graphing linear equations if you want to know more.

Now I want to show you one of my algebra courses that might be useful to you if you ever need it. Go to udemy.com, and in the search box just type in "algebra" and it should come up; it's the one with the black background image. If you select that option and go to the course content, you can see what's in this particular course. The first section is basic arithmetic, for those of you who want to focus on addition, subtraction, multiplication, and division, and it has a multiple-choice video quiz at the end: you can pause it, work on the problems, and see the solutions. It covers long division, multiplying two large numbers, and things like that. The next tutorial is on fractions: adding and subtracting fractions, multiplying and dividing fractions, converting fractions into decimals, and so forth, so you can also take a look at that. Next is solving linear equations, which we covered, with more examples if you need more help with that. The next topic is order of operations, which is also useful, then graphing linear equations: you need to know how to calculate the slope, be familiar with slope-intercept form and standard form, and know how to tell if lines are parallel or perpendicular, and there's a quiz that goes with that as well. The next topic is inequalities and absolute value expressions, which are also seen in a typical algebra course, and then polynomials, which is a long section, and then factoring, another topic you need to master. Then systems of equations: you can solve them by elimination or substitution, and there are word problems as well; sometimes you have to solve equations with three variables, x, y, and z, so that could be helpful. Next, quadratic equations: how to use the quadratic formula, how to graph them, how to convert between standard and vertex form. Then you have rational expressions and radical expressions: solving radical equations, simplifying them, things like that. Every section has a quiz, so you can always review what you've learned if you have a test the next day. Next we have complex imaginary numbers (you need to know how to simplify those), exponential functions, and logs; I have a lot of videos on logs. Then functions in general: vertical line tests, how to tell if functions are even or odd. Then conic sections: graphing circles, hyperbolas, ellipses, parabolas, and things like that; there are two video quizzes because it's actually a long section. And finally, arithmetic and geometric sequences and series. So that's my algebra course, if you want to take a look at it, and let me know what you think.
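The substitute-zero procedure from the transcript is easy to express in code. Here is a small sketch of my own (the handling of lines parallel to an axis is an addition); the test equations are the two standard-form examples worked in the video, which from the arithmetic are 2x − 3y = 6 and 3x + 4y = 12:

```python
def intercepts(a: float, b: float, c: float):
    """Return the x- and y-intercepts of the line a*x + b*y = c.

    Mirrors the video's method: set y = 0 and solve for x, then
    set x = 0 and solve for y. An intercept is None when the line
    is parallel to that axis (a zero coefficient).
    """
    x_int = (c / a, 0.0) if a != 0 else None  # y = 0  ->  a*x = c
    y_int = (0.0, c / b) if b != 0 else None  # x = 0  ->  b*y = c
    return x_int, y_int

print(intercepts(2, -3, 6))   # 2x - 3y = 6  ->  ((3.0, 0.0), (0.0, -2.0))
print(intercepts(3, 4, 12))   # 3x + 4y = 12 ->  ((4.0, 0.0), (0.0, 3.0))
```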
188853
https://dictionary.cambridge.org/us/grammar/british-grammar/do
Do is an irregular verb. Its three forms are do, did, done. The present simple third person singular is does: Will you do a job for me? I did some shopping this morning. Have you done your essay yet? He usually does his homework in front of the television.

We use do as a main verb and an auxiliary verb. We can also use it as a substitute verb.

Do as a main verb

Do as a main verb has a number of meanings. We use do to talk about actions in general, when we do not specify exactly what the action is: What have you been doing today, anything interesting? There is nothing we can do except wait and see what happens. Can I do anything to help?

We use do as a main verb to talk about achieving or completing things: I've done the washing up. Oh, thank you. We did 80 miles on the first day of our cycling holiday. She does the crossword in the newspaper every day.

We use do with nouns such as homework, job, task, work: She has a lot of homework to do tonight. I'm going to do some work in the garden this weekend.

If you want to know what someone's job or profession is, you can use the main verb do in a question: A: What does Jackie's brother do? B: He's an electrician. Not: What does Jackie's brother?

We use do to talk about studying subjects: A: What did you do at university? B: I did economics. All children have to do English in primary school.

Take part in activities: We use do as a main verb to talk about taking part in activities: I did a lot of hiking and mountain-climbing when I was younger. She did a trip down the Amazon when she was in Brazil.

Produce or create: Spoken English: We often use do with nouns such as copy, design, drawing, painting, especially in informal speech: I like that photo of you and me. Can you do me a copy? Who did the design for the website? She did a lovely painting of the lake where we stayed last summer. See also: Do or make?

Clean or make tidy: We use do as a main verb to talk about cleaning things or making them tidy: The cleaner was doing my room when I came back. I'll just do my hair and then I'll be ready.

Be enough or acceptable: We use do as a main verb with will or won't to talk about things being enough or acceptable: A: What size bag do you need? B: A small one will do. (a small one is enough/acceptable) See also: Do or make?

Do as an auxiliary verb

Do is one of three auxiliary verbs in English: be, do, have. We use do to make negatives (do + not), to make question forms, and to make the verb more emphatic.

| meaning | example |
| --- | --- |
| negative | I didn't see you at the concert the other night. |
| question form | Do they open at nine o'clock on weekdays? |
| emphatic | He does look smart in his new suit! |

Question (?) form: To make the question form of most main verbs, we use do, does (present simple) and did (past simple) followed by the subject and the main verb: Do you play football? Doesn't he phone you now and then? Did your mother come from the same place as your father?

Negative (−) form: The negative of the present simple and past simple of all main verbs (except for be and some uses of have as main verbs) is made with auxiliary do + not, which is shortened to don't (do not), doesn't (does not) and didn't (did not). We use the short forms in everyday informal language, and the full forms in more formal situations: I don't want to wait for a bus. Let's get a taxi. Jack doesn't live in the town centre.
He's out in the suburbs. Didn't you get my email? I sent it at about four o'clock. The Prime Minister does not take personal phone calls from members of the public. (more formal) Did the parents not realise that something serious had happened to their child? (more formal) See also: Be as a main verb; Have as a main verb.

Emphatic forms: We use do, does (present simple) or did (past simple) to give extra force to the main verb. We use the infinitive of the main verb without to, and stress do/does/did when speaking. Compare:

| neutral | emphatic |
| --- | --- |
| I like your new jacket. | I do like your new jacket! |
| She looks so tired. | She does look so tired! |
| I didn't recognise your dad, but I recognised your mum. | I didn't recognise your dad, but I did recognise your mum. |

We also use emphatic do with imperatives: Do come and have dinner with us some time. Do stop talking, Harry! You're boring everybody!

Question tags: We use auxiliary do to form question tags for clauses which do not have a modal verb, a verb in the perfect with have, or be. The tag uses the same person and tense as the subject of the main verb. The tag may be affirmative or negative, depending on the type of tag: You work with Peter, don't you? (affirmative main verb, negative tag) She plays the piano, doesn't she? Little children don't usually like spicy food, do they? (negative main verb, affirmative tag) They didn't stay very long, did they? You live near Harkness, do you? (affirmative verb, affirmative tag) They arrived late, did they? See also: Tags; Ellipsis.

Do as an auxiliary verb: typical errors

We don't use auxiliary do to make questions or negatives for clauses with modal verbs: Will you be here in time for lunch? Not: Do you will be here… I can't swim very well. Not: I don't can swim…

We use auxiliary do, not auxiliary be, for questions with main verbs in the present simple: Do you live in an apartment? Not: Are you live in…

We use does, not do, for the third person in the present tense: Does your sister have brown eyes too? Not: Do your sister have… See also: Do as a substitute verb; Ellipsis.

Do as a substitute verb

We often use do instead of repeating all the words in a clause. Do substitutes for the words we don't repeat: A: We went to the concert in the park this year. B: Yes, we did too. (Yes, we went to the concert in the park too.)

We don't use do alone if the substitute verb is in the to-infinitive form. In those cases, we omit the verb but keep to, or we use do so, do it or do that: It's not often I write letters to newspapers, but that day I desperately felt the need to. or … the need to do so/it/that. (I desperately felt the need to write letters to newspapers.) Not: … the need to do.

Do so, do it, do that: We sometimes add so, it or that after the substitute do. Do so, do it and do that are sometimes used differently, but they are often interchangeable: He said he was going to move to New Zealand and, to everyone's surprise, he did so/did it/did that.

Do so: We use do so mostly to refer to actions where the subject and verb are the same as the ones we have mentioned. Do so is generally more formal than do it and do that: I wanted them to leave, and politely asked them to do so, but they wouldn't go, so I called the police. (I wanted them to leave and I politely asked them to leave.)

Warning: Do so is more formal than do on its own: A: Do you mind if I open the present now? B: Yes, please do so.
(Do so substitutes for open the present now.)

We often use do so when we make a general reference to a series of actions or events: The birds make their nests on the north side of the island in little holes in the rocks. The reason why they do so is because the south side of the island is exposed to extreme winds.

Do it: We use do it when we refer to an action or an event involving a verb and an object, especially when the subject is different from the one already mentioned: A: He accidentally deleted some emails on his computer. B: I do it all the time. (I delete files all the time.)

Do that: Do that is more emphatic and we use it for deliberate actions: A: Would you ever give a complete stranger your phone number? B: No. I would never do that. (I would never give a complete stranger my phone number.)

We often use do that in situations where we are contrasting things: A: Would you like to have a few nights in a motel? B: No, we'd prefer not to do that. We'd rather have a nice hotel. (We'd prefer not to have a few nights in a motel.) A: I've decided to wait a year before starting college. I want to travel a bit and see the world. B: I really think you should do that rather than starting college. You're still so young. College will still be an option this time next year.

We can use a modal or an auxiliary verb + do to substitute for a main verb and what comes after it: A: I feel terrible. B: You should go to the doctor. A: I should do, I know, but I have so much work to finish. A: Has Martin met Paul before? B: He could have done at the sales meeting last year, but I'm not sure.
188854
https://math.stackexchange.com/questions/463978/minimize-the-sum-of-distances-between-two-point-and-a-circle
Minimize the sum of distances between two points and a circle

Asked 12 years ago · Viewed 2k times · 5

Let A, B and O be points in a plane, such that they are not collinear. Let c be a circle centered at O, such that the points A and B are outside of it. Find a point X that lies on the circle c such that the sum AX + BX is minimal.

First I tried to minimize the function
$$f(x,y)=\sqrt{(A_x-x)^2+(A_y-y)^2}+\sqrt{(B_x-x)^2+(B_y-y)^2}$$
with the constraint
$$g(x,y)=x^2+y^2-r^2=0,$$
after moving O to (0, 0) and translating A and B the same way, so that the relative distances do not change. But this works only when we know the coordinates of A and B and the radius of the circle, and it isn't the right way, because the problem is to construct the point.

Later I tried a geometric approach. The point of the circle closest to A is the intersection of the circle c and the ray OA; the same holds for B. So we construct circles centered at A and B that are tangent to c. If these two circles intersect, we get a common point for both A and B, and the distance will be minimal. Using a computer I found that the point X should be somewhere around the intersection of the circle c and the ray from O to this new point. Sometimes the optimal point differs just a bit, and the method also fails when the two circles do not intersect each other.

geometry · circles · plane-curves

asked Aug 10, 2013 by Stefan4024

1 Answer

Consider a family of nested ellipses with foci at the points A and B. Each such ellipse represents the set of locations in the plane for which the sum of the distances to the foci is constant. As we increase the size of the ellipse, it will eventually touch the circle; that touch point is the solution we are looking for.

It makes sense to choose coordinates such that the two foci are at (a, 0) and (−a, 0), where a = |AB|/2. Let the family of nested ellipses be parameterized by h, the length of the shorter semi-axis. Then the equation of such an ellipse is
$$\frac{x^2}{l^2}+\frac{y^2}{h^2}=1.$$
Here l = l(h) is the longer semi-axis, which is found from the condition
$$2\sqrt{a^2+h^2}=(l-a)+(l+a),$$
so $l=\sqrt{h^2+a^2}$ and the ellipse equation is
$$\frac{x^2}{h^2+a^2}+\frac{y^2}{h^2}=1.$$
The circle is of course represented by
$$(x-x_0)^2+(y-y_0)^2=c^2.$$
Now we have a system of two algebraic equations, and the problem boils down to determining when they have a single solution, which is the touch point. Note that there are two cases of touching: touching on the near side of the circle, and touching on the far side, when the circle is inside the ellipse. One can eliminate one of the variables from the equations, which converts the problem to a quartic equation in one variable, so a closed-form solution can be found by a known formula. A certain value of the parameter h will coalesce two real roots into one; that is the condition of touching.

From the above considerations one can reformulate the problem in purely geometric terms: the point X on the circle minimizing the sum of distances |AX| + |BX| satisfies the condition that the line OX is the bisector of angle AXB.
This formulation leads to an alternative approach to solving the problem which can be pursued algebraically, using trigonometric relations; but perhaps there is some clever geometric shortcut.

An approximate solution can be found using geometric optics: assume the circle is a convex mirror and find the image A' of the point A. The image will be virtual (inside the circle), and the line A'B will intersect the circle at a point X satisfying the reflection conditions and minimizing the sum of distances |AX| + |BX|. However, this geometric-optics solution only works as long as the circle can be approximated by a parabola (the paraxial approximation), which is satisfied in the limit ∠AOB ≪ 1.

P.S. It looks like a general geometric-optics solution (without approximations) is worked out in www.geometrictools.com/Documentation/SphereReflections.pdf and it leads to a quartic equation.

answered Aug 10, 2013 by Maxim Umansky

Comments:
- Good idea, but I don't need to find the point of tangency, I need to construct it. How can I construct an ellipse that is tangent to the circle c and has foci at points A and B? – Stefan4024, Aug 10, 2013
- Also this calculation doesn't work when the segment AB goes through the circle c; then the optimal solution is when one arc of the ellipse touches the circle and the other goes through it (the major axis divides it into these two arcs), and I doubt that using this calculation we'll get that ellipse. – Stefan4024, Aug 10, 2013
- Algebraic operations can be carried out geometrically, with a compass and straight edge. This would be a way to find the roots of the quartic equation. – Maxim Umansky, Aug 10, 2013
- I found out that the ray OX should bisect the angle AXB, but how can I construct that? – Stefan4024, Aug 11, 2013
- Yes, I also thought about this, and I wrote a system of trigonometric relations expressing this condition. But as far as I can see this does not lead to anything short and simple. Maybe there is some theorem about bisectors that can provide a shortcut; I'm not sure. – Maxim Umansky, Aug 11, 2013
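Not part of the thread, but the bisector condition is easy to verify numerically. Below is a small Python sketch (the function names and the test points are my own) that scans the circle for the minimizing X and then checks that OX makes equal angles with XA and XB:

```python
import math

def min_sum_on_circle(A, B, O, r):
    """Find X on the circle |X - O| = r (approximately) minimizing |AX| + |BX|.

    A numerical sketch: coarse 0.5-degree scan over the polar angle of X,
    then golden-section refinement around the best grid point. Assumes
    A and B lie outside the circle, as in the question.
    """
    def X(t):
        return (O[0] + r * math.cos(t), O[1] + r * math.sin(t))

    def total(t):
        x, y = X(t)
        return (math.hypot(A[0] - x, A[1] - y)
                + math.hypot(B[0] - x, B[1] - y))

    step = 2 * math.pi / 720
    t0 = min((k * step for k in range(720)), key=total)

    lo, hi = t0 - step, t0 + step
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(100):
        m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if total(m1) < total(m2):
            hi = m2
        else:
            lo = m1
    return X((lo + hi) / 2)

def angle_between(u, v):
    """Angle between two vectors, for checking the bisector property."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(dot / (math.hypot(*u) * math.hypot(*v)))

# Made-up test configuration (not from the thread):
A, B, O, r = (4.0, 3.0), (5.0, -2.0), (0.0, 0.0), 2.0
Xs = min_sum_on_circle(A, B, O, r)

# At the optimum, the line OX bisects angle AXB, so the angle between
# XO and XA should match the angle between XO and XB.
XO = (O[0] - Xs[0], O[1] - Xs[1])
XA = (A[0] - Xs[0], A[1] - Xs[1])
XB = (B[0] - Xs[0], B[1] - Xs[1])
print(Xs, angle_between(XO, XA), angle_between(XO, XB))
```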
188855
https://pubmed.ncbi.nlm.nih.gov/29266072/
ACOG Committee Opinion No. 728 Summary: Müllerian Agenesis: Diagnosis, Management, And Treatment - PubMed

Review. Obstet Gynecol. 2018 Jan;131(1):196-197. doi: 10.1097/AOG.0000000000002452. No authors listed. PMID: 29266072.

Abstract

Müllerian agenesis, also referred to as müllerian aplasia, Mayer-Rokitansky-Küster-Hauser syndrome, or vaginal agenesis, has an incidence of 1 per 4,500-5,000 females.
Müllerian agenesis is caused by embryologic underdevelopment of the müllerian duct, with resultant agenesis or atresia of the vagina, uterus, or both. Patients with müllerian agenesis usually are identified when they are evaluated for primary amenorrhea with otherwise typical growth and pubertal development. The most important steps in the effective management of müllerian agenesis are correct diagnosis of the underlying condition, evaluation for associated congenital anomalies, and psychosocial counseling in addition to treatment or intervention to address the functional effects of genital anomalies. The psychologic effect of the diagnosis of müllerian agenesis should not be underestimated. All patients with müllerian agenesis should be offered counseling and encouraged to connect with peer support groups. Future options for having children should be addressed with patients: options include adoption and gestational surrogacy. Assisted reproductive techniques with use of a gestational carrier (surrogate) have been shown to be successful for women with müllerian agenesis. Nonsurgical vaginal elongation by dilation should be the first-line approach. When well-counseled and emotionally prepared, almost all patients (90-96%) will be able to achieve anatomic and functional success by primary vaginal dilation. In cases in which surgical intervention is required, referrals to centers with expertise in this area should be considered because few surgeons have extensive experience in construction of the neovagina, and surgery by a trained surgeon offers the best opportunity for a successful result.
188856
https://stats.stackexchange.com/questions/221780/are-log-difference-time-series-models-better-than-growth-rates
Are log difference time series models better than growth rates?

Often I see authors estimate a "log difference" model, e.g.
$$\log (y_t)-\log(y_{t-1}) = \log(y_t/y_{t-1}) = \alpha + \beta x_t$$
I agree this is appropriate to relate $x_t$ to a percentage change in $y_t$ while $\log (y_t)$ is $I(1)$. But the log difference is an approximation, and it seems one could just as well estimate a model without the log transformation, e.g.
$$y_t/y_{t-1} -1 = (y_t - y_{t-1}) / y_{t-1}=\alpha+\beta x_t$$
Moreover, the growth rate would precisely describe the percent change, while the log difference would only approximate it. However, I've found the log-difference approach is used much more often. In fact, using the growth rate $y_t/y_{t-1}$ seems just as appropriate to address stationarity as taking the first difference. I have also found that forecasting becomes biased (sometimes called the retransformation problem in the literature) when transforming the log variable back to the level data.

What are the benefits of using the log difference compared to the growth rate? Are there any inherent problems with the growth-rate transformation? I'm guessing I am missing something, otherwise it would seem obvious to use that approach more often.

Tags: time-series, forecasting, data-transformation, econometrics, logarithm

Asked Jul 2, 2016 by A. Smith.

Comments:

- Thank you for your comments. I agree the symmetry and bounding is a significant advantage. It seems the bounding would help control heteroskedasticity and the symmetry would help hold the mean constant. – A. Smith, Jul 2, 2016
- The log-difference is not an approximation. It is a continuously compounded or exponential growth rate, as opposed to a period-over-period rate. They are different things. Laypersons understand the second one better, but the first one has cleaner mathematical properties (e.g. average growth is just the mean of the growth rates, the growth rate of a product is the sum of the rates, etc.). The bit about forecasting is either unnecessary transformation leading to explosive forecasts, or median-unbiased but not mean-unbiased, which is fine. It has nothing to do with continuous vs. period rates. – Chris Haug, Jul 22, 2017

2 Answers

Answer 1 (Christoph Hanck, answered Jul 2, 2016):

One major advantage of log-differences is symmetry: if you have a log difference of $0.1$ today and one of $-0.1$ tomorrow, you are back where you started. In contrast, 10% growth today and 10% decline tomorrow will not bring you back to the initial value.

Comments:

- The symmetry / bounding is the main advantage I see. Going from 100 to 10 is a log10 difference of -1, but -90%.
Going from 100 to 1000 is also a log difference of 1, but 900%. A linear model is going to pay inordinate attention to that 900% observation. – zbicyclist, Jul 2, 2016
- 7 years later but, a paper I'm reading points out that log-transforming the dependent variable is not innocuous. Having $\ln(y)=x$ instead of $y=x$ implies an exponential relationship $y=\exp(x)$. With, for example, $\ln(income)=\beta_0 + \beta_1\, education + u$, the transformation is reasonable because income increases nonlinearly and relative to the current level of education (e.g., 3 years of PhD are more valuable than 3 years of middle school). The paper argues we should only log-transform if an exponential relationship makes sense, and not out of convenience. – suckrates, Jan 10, 2024
- I'd be interested to hear your thoughts on that @Christoph Hanck. It's "Guideline 1" in this paper: journals.sagepub.com/doi/full/10.1177/1094428121991907. – suckrates, Jan 10, 2024
- I agree with the statement. I also think it is perfectly compatible with this post, which "only" addresses two alternative ways of computing such percentage changes. – Christoph Hanck, Jan 11, 2024

Answer 2 (suckrates, answered Jul 22, 2017):

Many macroeconomic indicators are tied to population growth, which is exponential, and thus have an exponential trend themselves. So the process before modelling with ARIMA, VAR or other linear methods is usually:

1. Take logs to get a series with a linear trend
2. Then difference to get a stationary series
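The symmetry claim, and Chris Haug's point that log differences aggregate cleanly, are easy to verify numerically. A minimal Python sketch (the numbers are arbitrary example values of my own):

import numpy as np

y0 = 100.0

# Period growth rates of +10% then -10% do not return to the start:
print(y0 * 1.10 * 0.90)                  # 99.0

# Log differences of +0.1 then -0.1 cancel exactly:
print(y0 * np.exp(0.1) * np.exp(-0.1))   # 100.0

# For small changes the two measures nearly coincide:
g = 0.02
print(np.log(1 + g))                     # ~0.0198, close to g = 0.02

# Log differences telescope: they sum to the total log change.
y = np.array([100.0, 103.0, 99.0, 105.0])
print(np.diff(np.log(y)).sum(), np.log(y[-1] / y[0]))   # equal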
188857
https://math.stackexchange.com/questions/3588794/if-a-prime-and-its-square-both-divide-a-number-n-prove-that-n-a2-b3
If a prime and its square both divide a number $n$, prove that $n=a^2b^3$

Let's call a number $n$ a fortified number if $n>0$ and for every prime number $p$, if $p\mid n$ then $p^2\mid n$. Given a fortified number, prove that there exist $a,b$ such that $n=a^2b^3$.

I know that this must revolve around the fundamental theorem of arithmetic, but I can't figure out how to do this proof. Some examples of fortified numbers: 1, 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 72, 81, 100, 108, 121, 125, 128, 144, 169, 196, 200

Tags: elementary-number-theory, discrete-mathematics, divisibility, prime-factorization

Asked Mar 21, 2020 by lemons25; edited Mar 21, 2020 by Jose Arnaldo Bebita Dris.

Comments:

- Do you allow $b$ to be $1$? – Ross Millikan, Mar 21, 2020
- $a,b\in\mathbb{N}$ so yes, I think $b=1$ is allowed – lemons25, Mar 21, 2020
- @lemons25 Welcome to Math SE.
Hint: Can you prove that for every integer $m\ge 2$, there are non-negative integers $c$ and $d$ such that $m=2c+3d$? – John Omielan, Mar 21, 2020
- @JohnOmielan yes I can! – lemons25, Mar 21, 2020
- Let's not call them fortified, they already have two names in the literature, we don't need three. Some people call them powerful, some call them squarefull. – Gerry Myerson, Mar 21, 2020

2 Answers

Answer 1 (John Omielan, answered Mar 21, 2020; edited Mar 21, 2020):

You're correct this is related to the fundamental theorem of arithmetic. The statement regarding a powerful or squarefull number (as Gerry Myerson's question comment suggests naming them instead of fortified) means all of its prime factors must appear to a power of $2$ or larger. You're asking to prove that all such numbers $n$ can be expressed using natural numbers $a$ and $b$ where
$$n=a^2b^3 \tag{1}$$
This involves basically proving that for every integer $m\ge 2$, there are non-negative integers $c$ and $d$ such that
$$m=2c+3d \tag{2}$$
You state in your comment you can do this, so I'll leave that to you, as it's fairly easy to do, such as with induction. Also note it's related to a particular case of the coin problem, which shows the maximum number that can't be expressed as in $(2)$ is $a_1a_2-a_1-a_2=2(3)-2-3=1$, so anything $2$ or larger can.

With this, consider each prime factor $p$ of $n$ and the p-adic valuation function $v_p(m)=e$, which gives, for any prime $p$, the highest exponent $e$ such that $p^e\mid m$. Then for any prime factor $p$ of $n$, write $v_p(n)=m=2c+3d$ and set $v_p(a)=c$ and $v_p(b)=d$. Do this for all $p\mid n$ to create the corresponding prime factors of $a$ and $b$, with both $a$ and $b$ having no other prime factors. You will then end up with $(1)$.

Comments:

- When you say "let $m$ be the power of $p$ in $n$", you mean choose $m$ such that $p^m\mid n$, right? – lemons25, Mar 21, 2020
- @lemons25 Yes, that is basically what I meant, with $m$ being the largest power where $p^m\mid n$. To help make that more clear, I updated my answer to add the detail of "the prime factorization" for the powers. – John Omielan, Mar 21, 2020
- @lemons25 As it's more standard in number theory, plus it makes the description more succinct (and possibly easier to follow), I changed the answer again to use the p-adic valuation function. – John Omielan, Mar 21, 2020
- +1 for explaining that this is an instance of the coin problem more than anything else. – Jyrki Lahtonen, Mar 21, 2020
- @JohnOmielan thanks a lot. this was great! – lemons25, Mar 21, 2020
Answer 2 (Bill Dubuque, answered Mar 21, 2020; edited Mar 21, 2020):

Hint: $n=a^2b^3$ for $b=$ the product of the primes that occur to an odd power in $n$, and $a=\sqrt{n/b^3}$, an integer since all primes in $n/b^3$ occur to an even power. E.g.
$$2^7\, 3^4\, 5^3\, 7^2=(2^2\, 3^2\, 7)^2\,(2\cdot 5)^3$$

Comments:

- i.e. to force an odd exponent $>2$ to be even, simply subtract $3$ from it, by moving a $p^3$ factor into $b^3$. It's simply the squarefree decomposition $n=cd^2$ modified to $c^3(d/c)^2$ in the case $c\mid d$. – Bill Dubuque, Mar 21, 2020
- A simple description. I wasn't looking for one, for my first reaction was to view it as a coin problem. Not much difference, but this approach makes the answer more explicit. – Jyrki Lahtonen, Mar 21, 2020
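Bill Dubuque's construction translates directly into code. Here is a minimal Python sketch (my own, not from the thread; it leans on sympy.factorint for the factorization and math.isqrt for the integer square root):

from math import isqrt
from sympy import factorint

def decompose_powerful(n):
    # Split a powerful number n (every prime exponent >= 2) as n = a**2 * b**3:
    # b collects one copy of each prime whose exponent is odd (that exponent
    # is >= 3, so subtracting 3 leaves it even), and a = sqrt(n / b**3).
    b = 1
    for p, e in factorint(n).items():
        if e % 2 == 1:
            b *= p
    a_sq = n // b**3            # all remaining exponents are even
    a = isqrt(a_sq)
    assert a * a == a_sq and a**2 * b**3 == n
    return a, b

print(decompose_powerful(2**7 * 3**4 * 5**3 * 7**2))   # (252, 10)

Running it on the answer's example $2^7 3^4 5^3 7^2$ returns $(a,b)=(252,10)$, i.e. $(2^2 3^2 7)^2(2\cdot 5)^3$, matching the hint.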
188858
https://mathinsight.org/derivative_quadratic_function
Math Insight

Calculating the derivative of a quadratic function

In the applet below, you can change the function to $f(x)=3x^2$ or another quadratic function to explore its derivative. To enter $f(x)=3x^2$, you can type 3x^2 in the box for $f(x)$.

The derivative of a function. The function $f(x)$ is plotted by the thick blue curve. Its derivative $f'(x)$ is shown by the thin green curve. The large red diamond on the graph of $f$ represents a point $(x_0,f(x_0))$, and you can change $x_0$ by dragging this point with your mouse. A tangent line to $f$ calculated at $x=x_0$ is shown by the red line. Its slope is the derivative $f'(x_0)$ of $f$ evaluated at $x=x_0$. This slope is also displayed by the smaller red diamond on the graph of $f'$, which is at the point $(x_0,f'(x_0))$. As you change $x_0$, this smaller diamond representing the slope traces out the graph of the derivative. You can change $f(x)$ by typing a new value in its box. The value of $f'(x)$ is displayed to the right of the box. You can hide items by unchecking the corresponding check boxes in order to test yourself on how well you can determine the derivative from the function or vice versa. You can use the buttons at the top to zoom in and out as well as pan the view.
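For readers without access to the interactive applet, a small Python sketch (my own addition, not part of the page) reproduces what the smaller red diamond displays: a central-difference estimate of the tangent slope, which for $f(x)=3x^2$ matches the exact derivative $f'(x)=6x$:

def f(x):
    return 3 * x**2              # the quadratic suggested in the text

def slope(g, x0, h=1e-6):
    # Central-difference estimate of g'(x0), the tangent-line slope
    # the applet shows at the dragged point.
    return (g(x0 + h) - g(x0 - h)) / (2 * h)

for x0 in (-1.0, 0.0, 2.0):
    print(x0, slope(f, x0), 6 * x0)   # numerical slope vs. exact 6x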
188859
https://communities.sas.com/t5/SAS-Programming/Converting-from-weeks-to-days/td-p/127600
Converting from weeks to days

Hi everyone, I have a variable called "Weeks_Days" which is in weeks, with the decimal part indicating the number of days (e.g. 20.3 = 20 weeks and three days = 20*7+3 = 143 days). Is there a way to convert those data to days?

10 ==> 70
20.3 ==> 143
38.6 ==> 272

Cheers, Jess

Re: Converting from weeks to days (accepted solution)

NumOfDays = (7*int(Week_Days)) + (10*(Week_Days - int(Week_Days)));

Re: Converting from weeks to days

data test;
week_days=38.6;
days=10*(week_days-floor(week_days));
weeks=floor(week_days);
num_days=(weeks*7)+days;
run;

Re: Converting from weeks to days

Sounds like a good reason to learn about compiling user-defined functions with PROC FCMP. Unless there is a base-7 counting solution like for octal and binary. peterC

Re: Converting from weeks to days

I would suggest: days = 7*int(weeks_days) + int(10*mod(weeks_days, 1));

Re: Converting from weeks to days

Ksharp
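One caveat worth flagging for all of these formulas: Week_Days arrives as a binary floating-point value, so an expression like 10*(20.1 - 20) evaluates to 0.999..., which a plain int() truncates to 0 instead of 1. Here is a hypothetical Python sketch of the same formula with a round() guard (not from the thread; the same guard can be applied in SAS with its ROUND function):

def weeks_days_to_days(wd):
    # Convert a weeks.days value (e.g. 20.3 = 20 weeks and 3 days) to days.
    # round() guards against binary floating point: 10 * (20.1 - 20) is
    # 0.999..., which int() would truncate to 0 instead of 1.
    weeks = int(wd)
    days = round(10 * (wd - weeks))
    return 7 * weeks + days

for wd in (10, 20.3, 38.6, 20.1):
    print(wd, weeks_days_to_days(wd))   # 70, 143, 272, 141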
188860
https://www.homesweetlearning.com/resources/sciences/electromagnetism.html
Electromagnetism

Magnetic Field Lines

Oersted's Principle (Principle of Electromagnetism): Whenever a charge moves through a straight conductor, a circular magnetic field is created around the conductor. Moving electric charges produce a magnetic field.

Right-Hand Rule and Left-Hand Rule for Magnetic Field Line Direction

If you hold a straight conductor in your right hand with your right thumb pointing in the direction of the current (direction of the positive charge), your curled fingers will point in the direction of the magnetic field lines. If you hold a straight conductor in your left hand with your left thumb pointing in the direction of the electron flow (direction of the negative charge), your curled fingers will point in the direction of the magnetic field lines.

Magnetic fields due to currents in a long straight wire

Direction determined by the RHR: grasp the wire with your right hand. If your thumb points in the direction of the current, your fingers will curl around the wire in the same direction as the magnetic field.

Strength at distance r from the wire: B = μI / (2πr), where I is the current and μ is the permeability of the material around the wire (in units of newtons per square ampere, N/A²). In air, close to vacuum, μ = μ₀ = 4π × 10⁻⁷ N/A², so we also have B = μ₀I / (2πr).

Solenoid & Right-Hand Rule for Solenoid

Winding a conductor into a coil containing several loops gives a solenoid. The magnetic field around a solenoid is similar to that of a bar magnet. To determine the direction of the magnetic field lines around a solenoid, wrap the fingers of your right hand around the coil in the direction of the conventional current; your right thumb will point in the direction of the north magnetic pole of the coil. The strength of a solenoid's magnetic field can be increased by increasing the number of loops, increasing the amount of electric current, including a soft-iron core, or any combination of these.
Magnetic field strength inside the solenoid is B = μNI/L, where μ is the permeability of the core, N is the number of loops, L is the length, and I is the current.

Magnetic Force

Any MOVING charge will produce a magnetic field around it, and any MOVING charge placed in an external magnetic field (however it is produced) will experience a magnetic force, due to the interaction of the two magnetic fields. Magnetic force depends on the velocity of the charge; gravitational and electric forces do not depend on the velocity of the charge/mass. Magnetic force is proportional to the charge and to the velocity of the charge, given the external magnetic field strength.

Magnetic Force on a Free Moving Charge in a Magnetic Field

The magnetic force F on a free moving charge is perpendicular to both the velocity of the charge and the magnetic field line direction; its direction is given by the right-hand rule. Its magnitude is

F = qvB sinθ

where q is the charge, v is the velocity of the charge, B is the external magnetic field strength (q is a scalar; v and B are vectors), and θ is the angle between v and B. Remember that the direction of the magnetic force on a free moving charge is perpendicular to the plane formed by v and B.

The right-hand rule (RHR) for determining the direction of the magnetic force experienced by a moving positive charge in a magnetic field: the thumb points in the direction of the moving positive charge's velocity v; the fingers point in the direction of the magnetic field B; the palm faces in the direction of the magnetic force F. This right-hand rule only applies to positive charges. You would need to use an equivalent left-hand rule for electrons, or just remember that if the force would be "up" for a positive charge, then the force will be "down" for a negative charge; that is, the force on a negative charge always acts 180° in the opposite direction.

Magnetic Force on a Current-Carrying Conductor in a Magnetic Field

The magnetic force on a current-carrying conductor in a magnetic field, Fm, is the product of the magnetic field intensity B, the length of the conductor L, the current in the conductor I, and the sine of the angle that the electric current makes with the magnetic field vector:

Fm = I L B sinθ

Charges and Uniform Circular Motion

If a free charge moves into a magnetic field with direction perpendicular to the field, it will follow a circular path. The magnetic force, being perpendicular to the velocity, provides the centripetal force. For example, a negatively charged particle moving in the plane of the page, in a region where the magnetic field points perpendicularly into the page, experiences a force that is always perpendicular to its velocity, so the velocity changes in direction but not in magnitude: uniform circular motion results. Setting the magnetic force equal to the centripetal force gives qvB = mv²/r, where r is the radius of the circle, v is the velocity, and m is the mass of the charge. To find the mass of the charge: m = qBr/v.
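To make the formulas above concrete, here is a short Python sketch (my own illustration; the currents, speeds, and the electron constants are example values, not from the page) that evaluates the straight-wire field B = μ₀I/(2πr) and the circular-motion radius r = mv/(qB):

import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, N/A^2

def wire_field(I, r):
    # Field of a long straight wire: B = mu_0 * I / (2 * pi * r)
    return MU_0 * I / (2 * math.pi * r)

def cyclotron_radius(m, v, q, B):
    # From qvB = m v^2 / r, the radius is r = m v / (q B)
    return m * v / (q * B)

# Field 5 cm from a wire carrying 10 A:
print(wire_field(10.0, 0.05))                  # 4.0e-5 T

# An electron moving at 2e6 m/s perpendicular to a 0.01 T field:
m_e, q_e = 9.11e-31, 1.602e-19
print(cyclotron_radius(m_e, 2e6, q_e, 0.01))   # ~1.1e-3 m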
Ampere's Experiment

If the current directions in 2 parallel wires are the same, then the magnetic field lines produced between the wires go in opposite directions, and an attractive magnetic force results: the 2 wires attract each other. If the current directions in the 2 parallel wires are opposite, then the field lines produced between the wires go in the same direction, and a repelling magnetic force results: the 2 wires repel each other.

Unit of Magnetic Field Strength

The unit of magnetic field strength is T, for tesla.

Formula 1: 1 T = 1 kg/(C·s), where C stands for coulomb and s stands for second.

Formula 2: 1 T = 1 N/(A·m), where A stands for ampere, m stands for meter, and N stands for newton.

Explanation of Formula 1: Remember that electric field intensity is Fe/C, because Fe is proportional to C. Since Fm is proportional to both C and the velocity of the charge, magnetic field strength/intensity = Fm/(Cv) = N / (C·(m/s)) = kg·(m/s²) / (C·(m/s)) = kg/(C·s), where m is meters.

Explanation of Formula 2: This definition is based on the fact that if two 1-meter-long current-carrying wire segments, each carrying 1 ampere of current, are either attracted to or repelled from each other by one newton of force, then the strength of the magnetic field produced by either current-carrying segment equals 1 tesla at the location of the other wire.

Direct Current Motor

Law of Electromagnetic Induction

Moving a straight-line conductor in a magnetic field will induce a potential difference between the conductor's 2 ends, and if the conductor is connected to a closed circuit there will be a current in the circuit:

- When the wire rests in the magnetic field, there is no current.
- When the wire is moved through the field parallel to the field (θ=0°), there is no current.
- When the wire is moved through the field so that it "cuts across" the field lines at angle θ, there will be a small current.
- When the wire cuts across the field lines at right angles (θ=90°), there will be maximum current.
- When the wire cuts across the field lines up and down, the current direction alternates.

The direction of the induced current is determined by the RHR: the fingers of the right hand point in the direction of the magnetic field lines; the thumb points in the direction of the velocity of the wire; the palm will indicate the direction of the induced current.

The induced potential difference is called motional emf, and when the wire cuts across the magnetic field lines at angle θ we have emf = BLv sinθ, where B is the magnetic field intensity (tesla), L is the wire length, and v is the wire velocity. For example, suppose we have a wire 0.5 m long, moving at right angles to a 0.04 T magnetic field with a velocity of 5 m/s. Then the induced emf will be emf = (0.04 T)(0.5 m)(5 m/s) = 0.1 V.

NOTE: emf = BLv sinθ can be explained by Faraday's law, emf = -N(ΔΦ/Δt), discussed below. Here N=1 and ΔΦ/Δt = B·A·cos(90°-θ)/Δt = B·L·x·sinθ/Δt = B·L·v·sinθ. Notice the θ used in emf = BLv sinθ is complementary to the θ used in the flux definition; also notice the area A = L·x, where x is the distance by which the wire moves in the direction of v.

Magnetic flux, magnetic flux density, and Faraday's law of electromagnetic induction

Suppose we have a coil made up of N turns of wire, and the coil has a cross-sectional area A. It cuts across a magnetic field with field intensity B at angle θ. Then we can define the magnetic flux Φ as Φ = B·A·cosθ. The unit of magnetic flux is the weber: 1 weber = 1 tesla · square meter. So what is magnetic flux density? Magnetic flux density is just magnetic field intensity: from Φ = B·A·cosθ we have B = Φ/(A·cosθ), and the unit of magnetic flux density is webers per square meter (Wb/m²).
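A quick Python sketch (mine; apart from the worked example above, the numbers are arbitrary) that reproduces the 0.1 V motional-emf example and evaluates the flux definition Φ = BA cosθ:

import math

def motional_emf(B, L, v, theta_deg=90.0):
    # Motional emf for a wire cutting field lines: emf = B * L * v * sin(theta)
    return B * L * v * math.sin(math.radians(theta_deg))

def magnetic_flux(B, A, theta_deg=0.0):
    # Magnetic flux through an area A: Phi = B * A * cos(theta), in webers
    return B * A * math.cos(math.radians(theta_deg))

# The worked example: 0.5 m wire, 0.04 T field, 5 m/s at right angles.
print(motional_emf(0.04, 0.5, 5.0))       # 0.1 V

# Flux through a 0.02 m^2 loop tilted 60 degrees in a 0.3 T field:
print(magnetic_flux(0.3, 0.02, 60.0))     # 0.003 Wb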
Faraday's law of electromagnetic induction states that the motional emf = -N(ΔΦ/Δt), where ΔΦ is the change in magnetic flux, Δt is the change in time, and N is the number of turns of wire in the coil. From Faraday's law, as long as the magnetic field changes there will be an induced emf, even if there is no relative velocity between the coil and the external magnetic field.

Variations of Faraday's Law

In example 1, two coils are penetrated by a changing magnetic field. Magnetic flux Φ is defined by Φ = BA, where B is the magnetic field (or average magnetic field) and A is the area perpendicular to the magnetic field. Note that for a given rate of change of the flux through the coil, the voltage generated is proportional to the number of turns N which the flux penetrates.

In example 2, a voltage is generated when a coil is moved into a magnetic field. This is sometimes called "motional emf", and it is proportional to the speed with which the coil is moved into the magnetic field. That speed can be expressed in terms of the rate of change of the area which is in the magnetic field.

In example 3, we see the standard AC generator geometry, where a coil of wire is rotated in a magnetic field. The rotation changes the perpendicular area of the coil with respect to the magnetic field and generates a voltage proportional to the instantaneous rate of change of the magnetic flux. For a constant rotational speed, the voltage generated is sinusoidal.

In example 4, voltage is generated by moving a magnet toward or away from a coil of wire. With the area constant, the changing magnetic field causes a voltage to be generated. The direction or "sense" of the voltage generated is such that any resulting current produces a magnetic field opposing the change in magnetic field which created it (Lenz's law).

Lenz's Law

If a changing magnetic field induces a current in a coil, the electric current is in such a direction that its own magnetic field opposes the change that produced it.

DC motor and generator

Alternating Current

Alternating current is an electric current that periodically reverses direction. In an alternating current, the voltage changes periodically from a maximum positive value to a maximum negative value, crossing zero in between. At zero voltage the lights dim, but at around 60 cycles per second (60 Hz) people cannot detect the flicker. Ohm's law is applicable to alternating current as well.

Galvanometer, Ammeter, Voltmeter

A galvanometer is made based on the motor principle. Ammeters and voltmeters make use of a galvanometer, Ohm's law, and the principles of parallel circuits.
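Combining Φ = BA cosθ with Faraday's law for the rotating coil of example 3: taking θ = ωt gives emf = NBAω sin(ωt), the sinusoidal output mentioned above. A small Python sketch (my own; the coil parameters are made-up example values):

import math

def generator_emf(N, B, A, omega, t):
    # Rotating coil (example 3): with Phi = B*A*cos(omega*t), Faraday's law
    # emf = -N * dPhi/dt gives emf = N*B*A*omega*sin(omega*t).
    return N * B * A * omega * math.sin(omega * t)

# A 100-turn coil of area 0.01 m^2 in a 0.2 T field, spun at 60 Hz:
N, B, A = 100, 0.2, 0.01
omega = 2 * math.pi * 60          # angular frequency, rad/s
for t in (0.0, 1/240, 1/120):     # start, quarter cycle, half cycle
    print(t, generator_emf(N, B, A, omega, t))
# Peak emf = N*B*A*omega, about 75.4 V at t = 1/240 s; zero at t = 0 and 1/120 s.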
188861
https://cayley.academic.csusb.edu/content/notes/vc_notes.pdf
Lectures on Vector Calculus

Paul Renteln
Department of Physics
California State University
San Bernardino, CA 92407

March, 2009; Revised March, 2011
© Paul Renteln, 2009, 2011

Contents

1 Vector Algebra and Index Notation
1.1 Orthonormality and the Kronecker Delta
1.2 Vector Components and Dummy Indices
1.3 Vector Algebra I: Dot Product
1.4 The Einstein Summation Convention
1.5 Dot Products and Lengths
1.6 Dot Products and Angles
1.7 Angles, Rotations, and Matrices
1.8 Vector Algebra II: Cross Products and the Levi Civita Symbol
1.9 Products of Epsilon Symbols
1.10 Determinants and Epsilon Symbols
1.11 Vector Algebra III: Tensor Product
1.12 Problems
2 Vector Calculus I
2.1 Fields
2.2 The Gradient
2.3 Lagrange Multipliers
2.4 The Divergence
2.5 The Laplacian
2.6 The Curl
2.7 Vector Calculus with Indices
2.8 Problems
3 Vector Calculus II: Other Coordinate Systems
3.1 Change of Variables from Cartesian to Spherical Polar
3.2 Vector Fields and Derivations
3.3 Derivatives of Unit Vectors
3.4 Vector Components in a Non-Cartesian Basis
3.5 Vector Operators in Spherical Coordinates
3.6 Problems
4 Vector Calculus III: Integration
4.1 Line Integrals
4.2 Surface Integrals
4.3 Volume Integrals
4.4 Problems
5 Integral Theorems
5.1 Green's Theorem
5.2 Stokes' Theorem
5.3 Gauss' Theorem
5.4 The Generalized Stokes' Theorem
5.5 Problems
A Permutations
B Determinants
B.1 The Determinant as a Multilinear Map
B.2 Cofactors and the Adjugate
B.3 The Determinant as Multiplicative Homomorphism
B.4 Cramer's Rule

List of Figures

1 Active versus passive rotations in the plane
2 Two vectors spanning a parallelogram
3 Three vectors spanning a parallelepiped
4 Reflection through a plane
31 5 An observer moving along a curve through a scalar field . . . . 33 6 Some level surfaces of a scalar field ϕ . . . . . . . . . . . . . . 35 7 Gradients and level surfaces . . . . . . . . . . . . . . . . . . . 36 8 A hyperbola meets some level surfaces of d . . . . . . . . . . . 37 9 Spherical polar coordinates and corresponding unit vectors . . 49 10 A parameterized surface . . . . . . . . . . . . . . . . . . . . . 65 v 1 Vector Algebra and Index Notation 1.1 Orthonormality and the Kronecker Delta We begin with three dimensional Euclidean space R3. In R3 we can define three special coordinate vectors ˆ e1, ˆ e2, and ˆ e3. 1 We choose these vectors to be orthonormal, which is to say, both orthogonal and normalized (to unity). We may express these conditions mathematically by means of the dot product or scalar product as follows: ˆ e1 · ˆ e2 = ˆ e2 · ˆ e1 = 0 ˆ e2 · ˆ e3 = ˆ e3 · ˆ e2 = 0 (orthogonality) (1.1) ˆ e1 · ˆ e3 = ˆ e3 · ˆ e1 = 0 and ˆ e1 · ˆ e1 = ˆ e2 · ˆ e2 = ˆ e3 · ˆ e3 = 1 (normalization). (1.2) To save writing, we will abbreviate these equations using dummy indices instead. (They are called ‘indices’ because they index something, and they are called ‘dummy’ because the exact letter used is irrelevant.) In index notation, then, I claim that the conditions (1.1) and (1.2) may be written ˆ ei · ˆ ej = δij. (1.3) How are we to understand this equation? Well, for starters, this equation is really nine equations rolled into one! The index i can assume the values 1, 2, or 3, so we say “i runs from 1 to 3”, and similarly for j. The equation is 1These vectors are also denoted ˆ ı, ˆ , and ˆ k, or ˆ x, ˆ y and ˆ z. We will use all three notations interchangeably. 1 valid for all possible choices of values for the indices. So, if we pick, say, i = 1 and j = 2, (1.3) would read ˆ e1 · ˆ e2 = δ12. (1.4) Or, if we chose i = 3 and j = 1, (1.3) would read ˆ e3 · ˆ e1 = δ31. (1.5) Clearly, then, as i and j each run from 1 to 3, there are nine possible choices for the values of the index pair i and j on each side, hence nine equations. The object on the right hand side of (1.3) is called the Kronecker delta. It is defined as follows: δij =      1 if i = j, 0 otherwise. (1.6) The Kronecker delta assumes nine possible values, depending on the choices for i and j. For example, if i = 1 and j = 2 we have δ12 = 0, because i and j are not equal. If i = 2 and j = 2, then we get δ22 = 1, and so on. A convenient way of remembering the definition (1.6) is to imagine the Kronecker delta as a 3 by 3 matrix, where the first index represents the row number and the second index represents the column number. Then we could write (abusing notation slightly) δij =     1 0 0 0 1 0 0 0 1    . (1.7) 2 Finally, then, we can understand Equation (1.3): it is just a shorthand way of writing the nine equations (1.1) and (1.2). For example, if we choose i = 2 and j = 3 in (1.3), we get ˆ e2 · ˆ e3 = 0, (1.8) (because δ23 = 0 by definition of the Kronecker delta). This is just one of the equations in (1.1). Letting i and j run from 1 to 3, we get all the nine orthornormality conditions on the basis vectors ˆ e1, ˆ e2 and ˆ e3. Remark. It is easy to see from the definition (1.6) or from (1.7) that the Kronecker delta is what we call symmetric. That is δij = δji. (1.9) Hence we could have written Equation (1.3) as ˆ ei · ˆ ej = δji. (1.10) (In general, you must pay careful attention to the order in which the indices appear in an equation.) Remark. 
We could have written Equation (1.3) as ˆ ea · ˆ eb = δab, (1.11) which employs the letters a and b instead of i and j. The meaning of the equation is exactly the same as before. The only difference is in the labels of the indices. This is why they are called ‘dummy’ indices. 3 Remark. We cannot write, for instance, ˆ ei · ˆ ea = δij, (1.12) as this equation makes no sense. Because all the dummy indices appearing in (1.3) are what we call free (see below), they must match exactly on both sides. Later we will consider what happens when the indices are not all free. 1.2 Vector Components and Dummy Indices Let A be a vector in R3. As the set {ˆ ei} forms a basis for R3, the vector A may be written as a linear combination of the ˆ ei: A = A1ˆ e1 + A2ˆ e2 + A3ˆ e3. (1.13) The three numbers Ai, i = 1, 2, 3, are called the (Cartesian) components of the vector A. We may rewrite Equation (1.13) using indices as follows: A = 3 X i=1 Aiˆ ei. (1.14) As we already know that i runs from 1 to 3, we usually omit the limits from the summation symbol and just write A = X i Aiˆ ei. (1.15) Later we will abbreviate this expression further. Using indices allows us to shorten many computations with vectors. For 4 example, let us prove the following formula for the components of a vector: Aj = ˆ ej · A. (1.16) We proceed as follows: ˆ ej · A = ˆ ej · X i Aiˆ ei ! (1.17) = X i Ai(ˆ ej · ˆ ei) (1.18) = X i Aiδij (1.19) = Aj. (1.20) In Equation (1.17) we simply substituted Equation (1.15). In Equation (1.18) we used the linearity of the dot product, which basically says that we can distribute the dot product over addition, and scalars pull out. That is, dot products are products between vectors, so any scalars originally multiplying vectors just move out of the way, and only multiply the final result. Equation (1.19) employed Equation (1.3) and the symmetry of δij. It is Equation (1.20) that sometimes confuses the beginner. To see how the transition from (1.19) to (1.20) works, let us look at it in more detail. The equation reads X i Aiδij = Aj. (1.21) Notice that the left hand side is a sum over i, and not i and j. We say that the index j in this equation is “free”, because it is not summed over. As j is free, we are free to choose any value for it, from 1 to 3. Hence (1.21) is really three equations in one (as is Equation (1.16)). Suppose we choose j = 1. 5 Then written out in full, Equation (1.21) becomes A1δ11 + A2δ21 + A3δ31 = A1. (1.22) Substituting the values of the Kronecker delta yields the identity A1 = A1, which is correct. You should convince yourself that the other two cases work out as well. That is, no matter what value of j we choose, the left hand side of (1.21) (which involved the sum with the Kronecker delta) always equals the right hand side. Looking at (1.21) again, we say that the Kronecker delta together with the summation has effected an “index substitution”, allowing us to replace the i index on the Ai with a j. In what follows we will often make this kind of index substitution without commenting. If you are wondering what happened to an index, you may want to revisit this discussion. Observe that I could have written Equation (1.16) as follows: Ai = ˆ ei · A, (1.23) using an i index rather than a j index. The equation remains true, because i, like j, can assume all the values from 1 to 3. However, the proof of (1.23) must now be different. Let’s see why. Repeating the proof line for line, but with an index i instead gives us the 6 following ˆ ei · A = ˆ ei · X i Aiˆ ei ! 
(1.24) = X i Ai(ˆ ei · ˆ ei)?? (1.25) = X i Aiδii?? (1.26) = Ai. (1.27) Unfortunately, the whole thing is nonsense. Well, not the whole thing. Equa-tion (1.24) is correct, but a little confusing, as an index i now appears both inside and outside the summation. Is i a free index or not? Well, there is an ambiguity, which is why you never want to write such an expression. The reason for this can be seen in (1.25), which is a mess. There are now three i indices, and it is never the case that you have a simultaneous sum over three indices like this. The sum, written out, reads A1(ˆ e1 · ˆ e1) + A2(ˆ e2 · ˆ e2) + A3(ˆ e3 · ˆ e3)?? (1.28) Equation (1.2) would allow us to reduce this expression to A1 + A2 + A3?? (1.29) which is definitely not equal to Ai under any circumstances. Equation (1.26) is equally nonsense. What went wrong? Well, the problem stems from using too many i indices. We can fix the proof, but we have to be a little more clever. The left hand side of (1.24) is fine. But instead of expressing A as a sum over i, 7 we can replace it by a sum over j! After all, the indices are just dummies. If we were to do this (which, by the way, we call “switching dummy indices”), the (correct) proof of (1.23) would now be ˆ ei · A = ˆ ei · X j Ajˆ ej ! (1.30) = X j Aj(ˆ ei · ˆ ej) (1.31) = X j Ajδij (1.32) = Ai. (1.33) You should convince yourself that every step in this proof is legitimate! 1.3 Vector Algebra I: Dot Product Vector algebra refers to doing addition and multiplication of vectors. Addi-tion is easy, but perhaps unfamiliar using indices. Suppose we are given two vectors A and B, and define C := A + B. (1.34) Then Ci = (A + B)i = Ai + Bi. (1.35) That is, the components of the sum are just the sums of the components of the addends. 8 Dot products are also easy. I claim A · B = X i AiBi. (1.36) The proof of this from (1.3) and (1.16) is as follows: A · B = X i Aiˆ ei ! · X j Bjˆ ej ! (1.37) = X ij AiBj(ˆ ei · ˆ ej) (1.38) = X ij AiBjδij (1.39) = X i AiBi. (1.40) A few observations are in order. First, (1.36) could be taken as the definition of the dot product of two vectors, from which we could derive the properties of the dot products of the basis vectors. We chose to do it this way to illustrate the computational power of index notation. Second, in Equation (1.38) the sum over the pair ij means the double sum over i and j separately. All we have done there is use the linearity of the dot product again to pull the scalars to the front and leave the vectors to multiply via the dot product. Third, in Equation (1.39) we used Equation (1.3), while in (1.40) we used the substitution property of the Kronecker delta under a sum. In this case we summed over j and left i alone. This changed the j to an i. We could have equally well summed over i and left the j alone. Then the final expression would have been X j AjBj. (1.41) 9 But, of course, X i AiBi = X j AjBj = A1B1 + A2B2 + A3B3, (1.42) so it would not have mattered. Dummy indices again! Lastly, notice that we would have gotten into big trouble had we used an i index in the sum for B instead of a j index. We would have been very confused as to which i belonged with which sum! In this case I chose an i and a j, but when you do computations like this you will have to be alert and choose your indices wisely. 1.4 The Einstein Summation Convention You can already see that more involved computations will require more in-dices, and the formulas can get a little crowded. This happened often to Einstein. 
Being the lazy guy he was, he wanted to simplify the writing of his formulas, so he invented a new kind of notation. He realized that he could simply erase the summation symbols, because it was always clear that, whenever two identical dummy indices appeared on the same side of an equa-tion they were always summed over. Removing the summation symbol leaves behind an expression with what we call an “implicit sum”. The sum is still there, but it is hiding. 10 As an example, let us rewrite the proof of (1.36): A · B = (Aiˆ ei) · (Bjˆ ej) (1.43) = AiBj(ˆ ei · ˆ ej) (1.44) = AiBjδij (1.45) = AiBi. (1.46) The only thing that has changed is that we have dropped the sums! We just have to tell ourselves that the sums are still there, so that any time we see two identical indices on the same side of an equation, we have to sum over them. As we were careful to use different dummy indices for the expansions of A and B, we never encounter any trouble doing these sums. But note that two identical indices on opposite sides of an equation are never summed. Having said this, I must say that there are rare instances when it becomes necessary to not sum over repeated indices. If the Einstein summation con-vention is in force, one must explicitly say “no sum over repeated indices”. I do not think we shall encounter any such computations in this course, but you never know. For now we will continue to write out the summation symbols. Later we will use the Einstein convention. 1.5 Dot Products and Lengths The (Euclidean) length of a vector A = A1ˆ e1+A2ˆ e2+A3ˆ e3 is, by definition, A = |A| = q A2 1 + A2 2 + A2 3. (1.47) 11 Hence, the squared length of A is A2 = A · A = X i A2 i . (1.48) Observe that, in this case, the Einstein summation convention can be con-fusing, because the right hand side would become simply A2 i , and we would not know whether we mean the square of the single component Ai or the sum of squares of the Ai’s. But the former interpretation would be nonsensical in this context, because A2 is clearly not the same as the square of one of its components. That is, there is only one way to interpret the equation A2 = A2 i , and that is as an implicit sum. Nevertheless, confusion still some-times persists, so under these circumstances it is usually best to either write A2 = AiAi, in which case the presence of the repeated index i clues in the reader that there is a suppressed summation sign, or else to simply restore the summation symbol. 1.6 Dot Products and Angles Let A be a vector in the plane inclined at an angle of θ to the horizontal. Then from elementary trigonometry we know that A1 = ˆ e1 · A = A cos θ where A is the length of A. It follows that if B is a vector of length B along the x axis, then B = Bˆ e1, and A · B = AB cos θ. (1.49) But now we observe that this relation must hold in general, no matter which way A and B are pointing, because we can always rotate the coordinate system until the two vectors lie in a plane with B along one axis. 12 x y v v′ θ Active x y v = v′ x′ y′ θ Passive Figure 1: Active versus passive rotations in the plane 1.7 Angles, Rotations, and Matrices This brings us naturally to the subject of rotations. There are many ways to understand rotations. A physicist understands rotations intuitively, whereas a mathematician requires a bit more rigor. We will begin with the intuitive approach, and later discuss the more rigorous version. Physicists speak of transformations as being either active or passive. Consider the rotation of a vector v in the plane. 
According to the active point of view, we rotate the vector and leave the coordinate system alone, whereas according to the passive point of view we leave the vector alone but rotate the coordinate system. This is illustrated in Figure 1. The two operations are physically equivalent, and we can choose whichever point of view suits us. Consider the passive point of view for a moment. How are the components of the vector v in the new coordinate system related to those in the old coordinate system? In two dimensions we can write v = v1ˆ e1 + v2ˆ e2 = v′ 1ˆ e′ 1 + v′ 2ˆ e′ 2 = v′. (1.50) 13 By taking dot products we find v′ 1 = v · ˆ e′ 1 = v1(ˆ e1 · ˆ e′ 1) + v2(ˆ e2 · ˆ e′ 1) (1.51) and v′ 2 = v · ˆ e′ 2 = v1(ˆ e1 · ˆ e′ 2) + v2(ˆ e2 · ˆ e′ 2). (1.52) It is convenient to express these equations in terms of matrices. Recall that we multiply two matrices using ‘row-column’ multiplication. If M is an m by p matrix and N is a p by n matrix, then the product matrix Q := MN is an m by n matrix whose ijth entry is the dot product of the ith row of M and the jth column of N. 2 Using indices we can express matrix multiplication as follows: Qij = (MN)ij = n X k=1 MikNkj. (1.53) You should verify that this formula gives the correct answer for matrix mul-tiplication. With this as background, observe that we can combine (1.51) and (1.52) into a single matrix equation:  v′ 1 v′ 2  =  ˆ e1 · ˆ e′ 1 ˆ e2 · ˆ e′ 1 ˆ e1 · ˆ e′ 2 ˆ e2 · ˆ e′ 2    v1 v2   (1.54) The 2 × 2 matrix appearing in (1.54), which we call R, is an example of a rotation matrix. Letting v and v′ denote the column vectors on either side of R, we can rewrite (1.54) as v′ = Rv. (1.55) 2This is why M must have the same number of columns as N has rows. 14 In terms of components, (1.55) becomes v′ i = X j Rijvj. (1.56) The matrix R is the mathematical representation of the planar rotation. Examining Figure 1, we see from (1.49) and (1.54) that the entries of R are simply related to the angle of rotation by R =  cos θ −sin θ sin θ cos θ  . (1.57) According to the active point of view, R represents a rotation of all the vectors through an angle θ in the counterclockwise direction. In this case the vector v is rotated to a new vector v′ with components v′ 1 and v′ 2 in the old coordinate system. According to passive point of view, R represents a rotation of the coor-dinate system through an angle θ in the clockwise direction. In this case the vector v remains unchanged, and the numbers v′ 1 and v′ 2 represent the components of v in the new coordinate system. Again, it makes no difference which interpretation you use, but to avoid confusion you should stick to one interpretation for the duration of any prob-lem! (In fact, as long as you just stick to the mathematics, you can usually avoid committing yourself to one interpretation or another.) We note two important properties of the rotation matrix in (1.57): RTR = I (1.58) det R = 1 (1.59) 15 Equation (1.59) just means that the matrix has unit determinant. 3 In (1.58) RT means the transpose of R, which is the matrix obtained from R by flipping it about the diagonal running from NW to SE, and I denotes the identity matrix, which consists of ones along the diagonal and zeros elsewhere. It turns out that these two properties are satisfied by any rotation matrix. To see this, we must finally define what we mean by a rotation. The definition is best understood by thinking of a rotation as an active transformation. Definition. 
A rotation is a linear map taking vectors to vectors that preserves lengths, angles, and handedness. The handedness condition says that a rotation must map a right handed coordinate system to a right handed coordinate system. The first two prop-erties can be expressed mathematically by saying that rotations leave the dot product of two vectors invariant. For, if v is mapped to v′ by a rotation R and w is mapped to w′ by R, then we must have v′ · w′ = v · w. (1.60) This is because, if we set w = v then (1.60) says that v′2 = v2 (where v′ = |v′| and v = |v|), so the length of v′ is the same as the length of v (and similarly, the length of w′ is the same as the length of w), and if w ̸= v then (1.60) says that v′w′ cos θ′ = vw cos θ, which, because the lengths are the same, implies that the angle between v′ and w′ is the same as the angle between v and w. Let’s see where the condition (1.60) leads. In terms of components we 3For a review of the determinant and its properties, consult Appendix B. 16 have X i v′ iw′ i = X i viwi = ⇒ X ijk (Rijvj)(Rikwk) = X jk δjkvjwk = ⇒ X jk ( X i RijRik −δjk)vjwk = 0 As the vectors v and w are arbitrary, we can conclude X i RijRik = δjk. (1.61) Note that the components of the transposed matrix RT are obtained from those of R by switching indices. That is, (RT)ij = Rji. Hence (1.61) can be written X i (RT)jiRik = δjk. (1.62) Comparing this equation to (1.53) we see that it can be written RTR = I. (1.63) Thus we see that the condition (1.63) is just another way of saying that lengths and angles are preserved by a rotation. Incidentally, yet another way of expressing (1.63) is RT = R−1, (1.64) where R−1 is the matrix inverse of R. 4 4The inverse A−1 of a matrix A satisfies AA−1 = A−1A = I. 17 Now, it is a fact that, for any two square matrices A and B, det AB = det A det B. (1.65) and det AT = det A, (1.66) (see Appendix B). Applying the two properties (1.65) and (1.66) to (1.63) gives (det R)2 = 1 ⇒ det R = ±1. (1.67) Thus, if R preserves lengths and angles then it is almost a rotation. It is a rotation if det R = 1, which is the condition of preserving handedness, and it is a roto-reflection (product of a rotation and a reflection) if det R = −1. The set of all linear transformations R satisfying (1.63) is called the orthogonal group, and the subset satisfying det R = 1 is called the special orthogonal group. 1.8 Vector Algebra II: Cross Products and the Levi Civita Symbol We have discussed the dot product, which is a way of forming a scalar from two vectors. There are other sorts of vector products, two of which are particularly relevant to physics. They are the vector or cross product, and the dyadic or tensor product. First we discuss the cross product. Let B and C be given, and define the 18 cross product B × C in terms of the following determinant: 5 B × C = ˆ e1 ˆ e2 ˆ e3 B1 B2 B3 C1 C2 C3 = (B2C3 −B3C2)ˆ e1 + (B3C1 −B1C3)ˆ e2 + (B1C2 −B2C1)ˆ e3 = (B2C3 −B3C2)ˆ e1 + cyclic. (1.68) It is clear from the definition that the cross product is antisymmetric, meaning that it flips sign if you flip the vectors: B × C = −C × B. (1.69) Just as the dot product admits a geometric interpretation, so does the cross product: the length of B × C is the area of the parallelogram spanned by the vectors B and C, and B×C points orthogonally to the parallelogram in the direction given by the right hand rule. 6 We see this as follows. Let θ be the angle between B and C. 
We can always rotate our vectors (or else our coordinate system) so that B lies along the x-axis and C lies somewhere in the xy plane. Then we have (see Figure 2): B = Bˆ e1 and C = C(cos θˆ e1 + sin θˆ e2), (1.70) so that B × C = BCˆ e1 × (cos θˆ e1 + sin θˆ e2) = BC sin θˆ e3. (1.71) 5The word ‘cyclic’ means that the other terms are obtained from the first term by suc-cessive cyclic permutation of the indices 1 →2 →3. For a brief discussion of permutations, see Appendix A. 6To apply the right hand rule, point your hand in the direction of B and close it in the direction of C. Your thumb will then point in the direction of B × C. 19 ˆ e1 ˆ e2 C B θ Figure 2: Two vectors spanning a parallelogram y z x C B A ψ θ Figure 3: Three vectors spanning a parallelepiped The direction is consistent with the right hand rule, and the magnitude, |B × C| = BC sin θ, (1.72) is precisely the area of the parallelogram spanned by B and C, as promised. We can now combine the geometric interpretation of the dot and cross products to get a geometric interpretation of the triple product A·(B×C): it is the volume of the parallelepiped spanned by all three vectors. Suppose A lies in the yz-plane and is inclined at an angle ψ relative to the z-axis, and that B and C lie in the xy-plane, separated by an angle θ, as shown in Figure 3. Then 20 A · (B × C) = A(cos ψˆ e3 + sin ψˆ e1) · BC sin θˆ e3 = ABC sin θ cos ψ = volume of parallelepiped = A1 A2 A3 B1 B2 B3 C1 C2 C3 , (1.73) where the last equality follows by taking the dot product of A with the cross product B × C given in (1.68). Since the determinant flips sign if two rows are interchanged, the triple product is invariant under cyclic permutations: A · (B × C) = B · (C × A) = C · (A × B). (1.74) It turns out to be convenient, when dealing with cross products, to define a new object that packages all the minus signs of a determinant in a conve-nient fashion. This object is called the Levi Civita Alternating Symbol. (It is also called a permutation symbol or the epsilon symbol. We will use any of these terms as suits us.) Formally, the Levi Civita alternating symbol εijk is a three-indexed object with the following two defining properties: i) ε123 = 1. ii) εijk changes sign whenever any two indices are interchanged. These two properties suffice to fix every value of the epsilon symbol. A priori there are 27 possible values for εijk, one for each choice of i, j, and k, each of which runs from 1 to 3. But the defining conditions eliminate most of them. For example, consider ε122. By property (ii) above, it should flip sign when 21 we flip the last two indices. But then we have ε122 = −ε122, and the only number that is equal to its negative is zero. Hence ε122 = 0. Similarly, it follows that εijk is zero whenever any two indices are the same. This means that, of the 27 possible values we started with, only 6 of them can be nonzero, namely those whose indices are permutations of (123). These nonzero values are determined by properties (i) and (ii) above. So, for example, ε312 = 1, because we can get from ε123 to ε312 by two index flips: ε312 = −ε132 = +ε123 = +1. (1.75) A moment’s thought should convince you that the epsilon symbol gives us the sign of the permutation of its indices, where the sign of a permutation is just −1 raised to the power of the number of flips of the permuation from the identity permutation (123). This explains its name ‘permutation symbol’. 
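If you would like to experiment with the alternating symbol, it is easy to build numerically. The following sketch (in Python with NumPy; the choice of tool is mine, not the notes') constructs εijk directly from its two defining properties and then checks the sign-of-permutation characterization just described:

```python
import numpy as np
from itertools import permutations

# Build the Levi Civita symbol (0-based indices stand in for 1,2,3).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = +1   # even permutations of (123)
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1   # odd permutations of (123)

# Property (ii): eps flips sign under interchange of any two indices,
# so only the 6 permutations of (123) can give nonzero entries.
assert np.count_nonzero(eps) == 6
assert np.allclose(eps, -eps.transpose(1, 0, 2))  # swap first two indices
assert np.allclose(eps, -eps.transpose(0, 2, 1))  # swap last two indices

def perm_sign(p):
    """Sign of a permutation: (-1) raised to the number of flips."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]  # one flip
            sign = -sign
    return sign

# eps_{ijk} is exactly the sign of the permutation (i j k).
for p in permutations(range(3)):
    assert eps[p] == perm_sign(p)
print("all epsilon checks pass")
```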
The connection between cross products and the alternating symbol is via the following formula: (A × B)i = X jk εijkAjBk. (1.76) To illustrate, let us choose i = 1. Then, written out in full, (1.76) reads (A × B)1 = ε111A1B1 + ε112A1B2 + ε113A1B3 + ε121A2B1 + ε122A2B2 + ε123A2B3 + ε131A3B1 + ε132A3B2 + ε133A3B3 = A2B3 −A3B2, (1.77) where the last equality follows by substituting in the values of the epsilon 22 symbols. You should check that the other two components of the cross prod-uct are given correctly as well. Observe that, using the summation conven-tion, (1.76) would be written (A × B)i = εijkAjBk. (1.78) Note also that, due to the symmetry properties of the epsilon symbol, we could also write (A × B)i = εjkiAjBk. (1.79) 1.9 Products of Epsilon Symbols There are four important product identities involving epsilon symbols. They are (using the summation convention throughout): εijkεmnp = δim δin δip δjm δjn δjp δkm δkn δkp (1.80) εijkεmnk = δimδjn −δinδjm (1.81) εijkεmjk = 2δim (1.82) εijkεijk = 3!. (1.83) The proofs of these identities are left as an exercise. To get you started, let’s prove (1.82). To begin, you must figure out which indices are free and which are summed. Well, j and k are repeated on the left hand side, so they are summed over, while i and m are both on opposite sides of the equation, so they are free. This means (1.82) represents nine equations, one for each possible pair of values for i and m. To prove the formula, we have to show 23 that, no matter what values of i and m we choose, the left side is equal to the right side. So let’s pick i = 1 and m = 2, say. Then by the definition of the Kronecker delta, the right hand side is zero. This means we must show the left hand side is also zero. For clarity, let us write out the left hand side in this case (remember, j and k are summed over, while i and m are fixed): ε111ε211 + ε112ε212 + ε113ε213 + ε121ε221 + ε122ε222 + ε123ε223 + ε131ε231 + ε132ε232 + ε133ε233. If you look carefully at this expression, you will see that it is always zero! The reason is that, in order to get something nonzero, at least one summand must be nonzero. But each summand is the product of two epsilon symbols, and because i and m are different, these two epsilon symbols are never simultaneously nonzero. The only time the first epsilon symbol in a term is nonzero is when the pair (j, k) is (2, 3) or (3, 2). But then the second epsilon symbol must vanish, as it has at least two 2s. A similar argument shows that the left hand side vanishes whenever i and m are different, and as the right hand side also vanishes under these circumstances, the two sides are always equal whenever i and m are different. What if i = m? In that case the left side is 2, because the sum includes precisely two nonzero summands, each of which has the value 1. For example, if i = m = 1, the two nonzero terms in the sum are ε123ε123 and ε132ε132, each of which is 1. But the right hand side is also 2, by the properties of the Kronecker delta. Hence the equation holds. In general, this is a miserable way to prove the identities above, because 24 you have to consider all these cases. The better way is to derive (1.81), (1.82), and (1.83) from (1.80) (which I like to call “the mother of all epsilon identities”). (The derivation of (1.80) proceeds by comparing the symmetry properties of both sides.) To demonstrate how this works, consider obtaining (1.82) from (1.81). Observe that (1.81) represents 34 = 81 equations, as i, j, m, and n are free (only k is summed). 
We want to somehow relate it to (1.82). This means we need to set n equal to j and sum over j. We are able to do this because (1.81) remains true for any values of j and n. So it certainly is true if n = j, and summing true equations produces another true equation. If we do this (which, by the way, is called contracting the indices j and n) we get the left hand side of (1.82). So we must show that doing the same thing to the right hand side of (1.81) (namely, setting n = j and summing over j) yields the right hand side of (1.82). If we can do this we will have completed our proof that (1.82) follows from (1.81). So, we must show that δimδjj −δjmδij = 2δim. (1.84) Perhaps it would be a little more clear if we restored the summation symbols, giving X j δimδjj − X j δjmδij = 2δim. (1.85) The first sum is over j, so we may pull out the δim term, as it is independent of j. Using the properties of the Kronecker delta, we see that P j δjj = 3. So the first term is just 3δim. The second term is just δim, using the substitution property of the Kronecker delta. Hence the two sides are equal, as desired. Example 1 The following computation illustrates the utility of the formulae 25 (1.80)-(1.83). The objective is to prove the vector identity A × (B × C) = B(A · C) −C(A · B), (1.86) the so-called “BAC minus CAB rule”. We proceed as follows (summation convention in force): (A × (B × C))i = εijkAj(B × C)k (1.87) = εijkAjεklmBlCm (1.88) = (δilδjm −δimδjl)AjBlCm (1.89) = AjBiCj −AjBjCi (1.90) = (B(A · C) −C(A · B))i. (1.91) and we are done. This was a little fast, perhaps. So let us fill in a few of the steps. Observe that we choose to prove that the left and right hand sides of (1.86) are the same by proving their components are the same. This makes sense according to the way in which we introduced cross products via epsilon symbols. Equation (1.87) is obtained from (1.78), leaving B × C temporarily unex-panded. In (1.88) we apply (1.78) again, this time to B × C. Notice that we had to choose different dummy indices for the second epsilon expansion, otherwise we would have gotten into trouble, as we have emphasized previously. In (1.89) we did a few things all at once. First, we commuted the Aj and εklm terms. We can always do this because, for any value of the indices, these two quantities are just numbers, and numbers always commute. Second, we permuted some indices in our head in order to bring the index structure of the epsilon product into the form exhibited in (1.81). In particular, we substituted εlmk for εklm, which we can do by virtue of the symmetry properties of the epsilon symbol. Third, we applied 26 (1.81) to the product εijkεlmk. To get (1.90) we used the substitution property of the Kronecker delta. Finally, we recognized that AjCj is just A · C, and AjBj is A · B. The equality of (1.90) and (1.91) is precisely the definition of the ith component of the right hand side of (1.86). The result then follows because two vectors are equal if and only if their components are equal. 1.10 Determinants and Epsilon Symbols Given the close connection between cross products and determinants, it should come as no surprise that there are formulas relating determinants to epsilon symbols. Consider again the triple product (1.73). Using the epsilon symbol we can write A · (B × C) = X k Ak(B × C)k = X ijk εkijAkBiCj = A1 A2 A3 B1 B2 B3 C1 C2 C3 (1.92) Thus, det A = A11 A12 A13 A21 A22 A23 A31 A32 A33 = X ijk εijkA1iA2jA3k. 
(1.93) We could just as well multiply on the left by ε123, because ε123 = 1, in which case (1.93) would read ε123 det A = X ijk εijkA1iA2jA3k. (1.94) As the determinant changes sign whenever any two rows of the matrix are 27 switched, it follows that the right hand side has exactly the same symmetries as the left hand side under any interchange of 1, 2, and 3. Hence we may write εmnp det A = X ijk εijkAmiAnjApk. (1.95) Again, using our summation convention, this would be written εmnp det A = εijkAmiAnjApk. (1.96) Finally, we can transform (1.96) into a more symmetric form by using prop-erty (1.83). Multiply both sides by εmnp, sum over m, n, and p, and divide by 3! to get 7 det A = 1 3!εmnpεijkAmiAnjApk. (1.97) 1.11 Vector Algebra III: Tensor Product So, what is a tensor anyway? There are many different ways to introduce the notion of a tensor, varying from what some mathematicians amusingly call “low brow” to “high brow”. In keeping with the discursive nature of these notes, I will restrict the discussion to the “low brow” approach, reserving a more advanced treatment for later work. To start, we define a new kind of vector product called the tensor prod-uct, usually denoted by the symbol ⊗. Given two vectors A and B, we can form their tensor product A ⊗B. A ⊗B is called a tensor of order 2. 8 The tensor product is not generally commutative—order matters. So 7Because we have restricted attention to the three dimensional epsilon symbol, the formulae in this section work only for 3 × 3 matrices. One can write formulae for higher determinants using higher dimensional epsilon symbols, but we shall not do so here. 8N.B. Many people use the word ‘rank’ interchangeably with the word ‘order’, so that A ⊗B is then called a tensor of rank 2. The problem with this terminology is that it 28 B ⊗A is generally different from A ⊗B. We can form higher order tensors by repeating this procedure. So, for example, given another vector C, we have A ⊗B ⊗C, a third order tensor. (The tensor product is associative, so we need not worry about parentheses.) Order zero tensors are just scalars, while order one tensors are just vectors. In older books, the tensor A ⊗B is sometimes called a dyadic product (of the vectors A and B), and is written AB. That is, the tensor product symbol ⊗is simply dropped. This generally leads to no confusion, as the only way to understand the proximate juxtaposition of two vectors is as a tensor product. We will use either notation as it suits us. The set of all tensors forms a mathematical object called a graded alge-bra. This just means that you can add and multiply as usual. For example, if α and β are numbers and S and T are both tensors of order s, then αT +βS is a tensor of order s. If R is a tensor of order r then R ⊗S is a tensor of order r + s. In addition, scalars pull through tensor products T ⊗(αS) = (αT ) ⊗S = α(T ⊗S), (1.98) and tensor products are distributive over addition: R ⊗(S + T ) = R ⊗S + R ⊗T . (1.99) Just as a vector has components in some basis, so does a tensor. Let conflicts with another standard usage. In linear algebra the rank of a matrix is the number of linearly independent rows (or columns). If we consider the components of the tensor A ⊗B, namely AiBj, to be the components of a matrix, then this matrix only has rank 1! (The rows are all multiples of each other.) To avoid this problem, one usually says that a tensor of the form A1 ⊗A2 ⊗· · · has rank 1. 
Any tensor is a sum of rank 1 tensors, and we say that the rank of the tensor is the minimum number of rank 1 tensors needed to write it as such a sum. 29 ˆ e1, ˆ e2, ˆ e3 be the canonical basis of R3. Then the canonical basis for the vector space R3 ⊗R3 of order 2 tensors on R3 is given by the set ˆ ei ⊗ˆ ej, as i and j run from 1 to 3. Written out in full, these basis elements are ˆ e1 ⊗ˆ e1 ˆ e1 ⊗ˆ e2 ˆ e1 ⊗ˆ e3 ˆ e2 ⊗ˆ e1 ˆ e2 ⊗ˆ e2 ˆ e2 ⊗ˆ e3 ˆ e3 ⊗ˆ e1 ˆ e3 ⊗ˆ e2 ˆ e3 ⊗ˆ e3 . (1.100) The most general second order tensor on R3 is a linear combination of these basis tensors: T = X ij Tijˆ ei ⊗ˆ ej. (1.101) Almost always the basis is understood and fixed throughout. For this reason, tensors are often identified with their components. So, for example, we often do not distinguish between the vector A and its components Ai. Similarly, we often call Tij a tensor, when it is really just the components of a tensor in some basis. This terminology drives mathematicians crazy, but it works for most physicists. This is the reason why we have already referred to the Kronecker delta δij and the epsilon tensor εijk as ‘tensors’. As an example, let us find the components of the tensor A⊗B. We have A ⊗B = ( X i Aiˆ ei) ⊗( X j Bjˆ ej) (1.102) = X ij AiBjˆ ei ⊗ˆ ej, (1.103) so the components of A ⊗B are just AiBj. This works in general, so that, for example, the components of A ⊗B ⊗C are just AiBjCk. It is perhaps worth observing that a tensor of the form A ⊗B for some 30 H ˆ n x σ(x) Figure 4: Reflection through a plane vectors A and B is not the most general order two tensor. The reason is that the most general order two tensor has 9 independent components, whereas AiBj has only 6 independent components (three from each vector). 1.12 Problems 1) The Cauchy-Schwarz inequality states that, for any two vectors u and v in Rn: (u · v)2 ≤(u · u)(v · v), with equality holding if and only if u = λv for some λ ∈R. Prove the Cauchy-Schwarz inequality. [Hint: Use angles.] 2) Show that the equation a · r = a2 defines a two dimensional plane in three dimensional space, where a is the minimal length vector from the origin to the plane. [Hint: A plane is the translate of the linear span of two vectors. The Cauchy-Schwarz inequality may come in handy.] 3) A reflection σ through a plane H with unit normal vector ˆ n is a linear map satisfying (i) σ(x) = x, for x ∈H, and (ii) σ(ˆ n) = −ˆ n. (See Figure 4.) Find an expression for σ(x) in terms of x, ˆ n, and the dot product. Verify that σ2 = 1, as befits a reflection. 4) The volume of a tetrahedron is V = bh/3, where b is the area of a base and h is the height (distance from base to apex). Consider a tetrahedron with one vertex at the origin and the other three vertices at positions A, B and 31 C. Show that we can write V = 1 6A · (B × C). This demonstrates that the volume of such a tetrahedron is one sixth of the volume of the parallelepiped defined by the vectors A, B and C. 5) Prove Equation (1.80) by the following method. First, show that both sides have the same symmetry properties by showing that both sides are anti-symmetric under the interchange of a pair of {ijk} or a pair of {mnp}, and that both sides are left invariant if you exchange the sets {ijk} and {mnp}. Next, show that both sides agree when (i, j, k, m, n, p) = (1, 2, 3, 1, 2, 3). 6) Using index notation, prove Lagrange’s identity: (A × B) · (C × D) = (A · C)(B · D) −(A · D)(B · C). 7) For any two matrices A and B, show that (AB)T = BT AT and (AB)−1 = B−1A−1. 
[Hint: You may wish to use indices for the first equation, but for the second use the uniqueness of the inverse.] 8) Let R(θ) and R(ψ) be planar rotations through angles θ and ψ, respectively. By explicitly multiplying the matrices together, show that R(θ)R(ψ) = R(θ + ψ). [Remark: This makes sense physically, because it says that if we first rotate a vector through an angle ψ and then rotate it through an angle θ, that the result is the same as if we simply rotated it through a total angle of θ + ψ. Incidentally, this shows that planar rotations commute, which means that we get the same result whether we first rotate through ψ then θ, or first rotate through θ then ψ, as one would expect. This is no longer true for rotations in three and higher dimensions where the order of rotations matters, as you can see by performing successive rotations about different axes, first in one order and then in the opposite order.] 2 Vector Calculus I It turns out that the laws of physics are most naturally expressed in terms of tensor fields, which are simply fields of tensors. We have already seen many 32 r(t) v(t) ϕ Figure 5: An observer moving along a curve through a scalar field examples of this in the case of scalar fields and vector fields, and tensors are just a natural generalization. But in physics we are not just interested in how things are, we are also interested in how things change. For that we need to introduce the language of change, namely calculus. This leads us to the topic of tensor calculus. However, we will restrict ourselves here to tensor fields of order 0 and 1 (scalar fields and vector fields) and leave the general case for another day. 2.1 Fields A scalar field ϕ(r) is a field of scalars. This means that, to every point r we associate a scalar quantity ϕ(r). A physical example is the electrostatic potential. Another example is the temperature. A vector field A(r) is a field of vectors. This means that, to every point r we associate a vector A(r). A physical example is the electric field. Another example is the gravitational field. 33 2.2 The Gradient Consider an observer moving through space along a parameterized curve r(t) in the presence of a scalar field ϕ(r). According to the observer, how fast is ϕ changing? For convenience we work in Cartesian coordinates, so that the position of the observer at time t is given by r(t) = (x(t), y(t), z(t)). (2.1) At this instant the observer measures the value ϕ(t) := ϕ(r(t)) = ϕ(x(t), y(t), z(t)) (2.2) for the scalar field ϕ. Thus dϕ/dt measures the rate of change of ϕ along the curve. By the chain rule this is dϕ(t) dt = ∂ϕ ∂x dx dt + ∂ϕ ∂y dy dt + ∂ϕ ∂z dz dt = dx dt , dy dt , dz dt  · ∂ϕ ∂x, ∂ϕ ∂y , ∂ϕ ∂y  = v · ∇ϕ, (2.3) where v(t) = dr dt = dx dt , dy dt , dz dt  (2.4) is the velocity vector of the particle. The quantity ∇is called the gradient operator. We interpret Equation (2.3) by saying that the rate of change of ϕ in the direction v is dϕ dt = v · (∇ϕ) = (v · ∇)ϕ. (2.5) 34 ϕ = c1 ϕ = c2 ϕ = c3 Figure 6: Some level surfaces of a scalar field ϕ The latter expression is called the directional derivative of ϕ in the direc-tion v. We can understand the gradient operator in another way. Definition. A level surface (or equipotential surface) of a scalar field ϕ(r) is the locus of points r for which ϕ(r) is constant. (See Figure 6.) With this definition we make the following Claim 2.1. ∇ϕ is a vector field that points everywhere orthogonal to the level surfaces of ϕ and in the direction of fastest increase of ϕ. Proof. 
Pick a point in a level surface and suppose that ∇ϕ fails to be or-thogonal to the level surface at that point. Consider moving along a curve lying within the level surface (Figure 7). Then v = dr/dt is tangent to the surface, which implies that dϕ dt = v · ∇ϕ ̸= 0, a contradiction. Also, dϕ/dt is positive when moving from low ϕ to high ϕ, so ∇ϕ must point in the direction of increase of ϕ. 35 curve in level surface hypothetical direction of ∇ϕ tangent vector to curve Figure 7: Gradients and level surfaces Example 2 Let T = x2 −y2 + z2 −2xy + 2yz + 273. Suppose you are at the point (3, 1, 4). Which way does it feel hottest? What is the rate of increase of the temperature in the direction (1, 1, 1) at this point? We have ∇T (3,14) = (2(x −y), −2y −2x + 2z, 2z + 2y) (3,1,4) = (4, 0, 10). Since dT dt = v · ∇ϕ, the rate of increase in the temperature depends on the speed of the observer. If we want to compute the rate of temperature increase independent of the speed of the observer we must normalize the direction vector. This gives, for the rate of increase 1 √ 3(1, 1, 1) · (4, 0, 10) = 8.1. If temperature were measured in Kelvins and distance in meters this last answer would be in K/m. 36 P Q hyperbola ∇d, ∇h ∇d ∇h v level surfaces of d Figure 8: A hyperbola meets some level surfaces of d 2.3 Lagrange Multipliers One important application of the gradient operator is to constrained opti-mization problems. Let’s consider a simple example first. We would like to find the point (or points) on the hyperbola xy = 4 closest to the origin. (See Figure 8.) Of course, this is geometrically obvious, but we will use the method of Lagrange multipliers to illustrate the general method, which is applicable in more involved cases. The distance from any point (x, y) to the origin is given by the function d(x, y) = p x2 + y2. Define h(x, y) = xy. Then we want the solution (or solutions) to the problem minimize d(x, y) subject to the constraint h(x, y) = 4. 37 d(x, y) is called the objective function, while h(x, y) is called the constraint function. We can interpret the problem geometrically as follows. The level surfaces of d are circles about the origin, and the direction of fastest increase in d is parallel to ∇d, which is orthogonal to the level surfaces. Now imagine walking along the hyperbola. At a point Q on the hyperbola where ∇h is not parallel to ∇d, v has a component parallel to ∇d, 9 so we can continue to walk in the direction of the vector v and cause the value of d to decrease. Hence d was not a minimum at Q. Only when ∇h and ∇d are parallel (at P) do we reach the minimum of d subject to the constraint. Of course, we have to require h = 4 as well (otherwise we might be on some other level surface of h by accident). Hence, the minimum of d subject to the constraint is achieved at a point r0 = (x0, y0), where ∇d|r0 = λ∇h|r0 (2.6) and h(r0) = 4, (2.7) and where λ is some unknown constant, called a Lagrange multiplier. At this point we invoke a small simplification, and change our objective function to f(x, y) = [d(x, y)]2 = x2 + y2, because it is easy to see that d(x, y) and f(x, y) are minimized at the same points. So, we want to solve the equations ∇f = λ∇h (2.8) and h = 4. (2.9) 9Equivalently, v is not tangent to the level surfaces of d. 38 In our example, these equations become ∂f ∂x = λ∂h ∂x ⇒2x = λy (2.10) ∂f ∂y = λ∂h ∂y ⇒2y = λx (2.11) h = 4 ⇒xy = 4. (2.12) Solving them (and discarding the unphysical complex valued solution) yields (x, y) = (2, 2) and (x, y) = (−2, −2). Hardly a surprise. 
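For readers who want to check such computations by machine, the system (2.10)-(2.12) can be handed directly to a computer algebra system. Here is a minimal sketch in Python with SymPy (my choice of tool; the notes themselves use none):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2   # squared distance to the origin (the objective)
h = x*y           # the constraint function

# Lagrange conditions: grad f = lam * grad h, together with h = 4.
eqs = [sp.diff(f, x) - lam*sp.diff(h, x),   # 2x = lam*y
       sp.diff(f, y) - lam*sp.diff(h, y),   # 2y = lam*x
       h - 4]                               # xy = 4
print(sp.solve(eqs, [x, y, lam], dict=True))
# Expect the two real solutions (x, y) = (2, 2) and (-2, -2), each with lam = 2.
```

Declaring the symbols real makes SymPy discard the complex-valued solution automatically, just as we did by hand above.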
Remark. The method of Lagrange multipliers does not tell you whether you have a maximum, minimum, or saddle point for your objective function, so you need to check this by other means. In higher dimensions the mathematics is similar—we just add variables. If we have more than one constraint, though, we need to impose more condi-tions. Suppose we have one objective function f, but m constraints, h1 = c1, h2 = c2, . . . , hm = cm. If ∇f had a component tangent to every one of the constraint surfaces at some point r0, then we could move a bit in that direction and change f while maintaining all the constraints. But then r0 would not be an extreme point of f. So ∇f must be orthogonal to at least some (possibly all) of the constraint surfaces at that point. This means that ∇f must be a linear combination of the gradient vectors ∇hi. Together with the constraint equations themselves, the conditions now read ∇f = m X i=1 λi∇hi (2.13) hi = ci (i = 1, . . . , m). (2.14) 39 Remark. Observe that this is the correct number of equations. If we are in Rn, there are n variables and the gradient operator is a vector of length n, so (2.13) gives n equations. (2.14) gives m more equations, for a total of m + n, and this is precisely the number of unknowns (namely, x1, x2, . . . , xn, λ1, λ2, . . . , λm). Remark. The desired equations can be packaged more neatly using the La-grangian function for the problem. In the preceding example, the Lagrangian function is F = f − X i λihi. (2.15) If we define the augmented gradient operator to be the vector operator given by ∇′ =  ∂ ∂x1, ∂ ∂x2, . . . , ∂ ∂xn, ∂ ∂λ1 , ∂ ∂λ2 , . . . , ∂ ∂λm  , then Equations (2.13) and (2.14) are equivalent to the single equation ∇′F = 0. (2.16) This is sometimes a convenient way to remember the optimization equations. Remark. Let’s give a physicists’ proof of the correctness of the method of Lagrange multipliers for the simple case of one constraint. The general case follows similarly. We want to extremize f(r) subject to the constraint h(r) = c. Let r(t) be a curve lying in the level surface Σ := {r|h(r) = c}, and set r(0) = r0. Then dr(t)/dt|t=0 is tangent to Σ at r0. Now restrict f to Σ and suppose f(r0) is an extremum. Then f(r(t)) is extremized at t = 0. But this implies that 0 = d f(r(t)) dt t=0 = dr(t) dt t=0 · ∇f|r0 . (2.17) 40 Hence ∇f|r0 is orthogonal to Σ, so ∇f|r0 and ∇h|r0 are proportional. 2.4 The Divergence Definition. In Cartesian coordinates, the divergence of a vector field A = (Ax, Ay, Az) is the scalar field given by ∇· A = ∂Ax ∂x + ∂Ay ∂y + ∂Az ∂z . (2.18) Example 3 If A = (3xz, 2y2x, 4xy) then ∇· A = 3z + 4xy. N.B. The gradient operator takes scalar fields to vector fields, while the divergence operator takes vector fields to scalar fields. Try not to confuse the two. 2.5 The Laplacian Definition. The Laplacian of a scalar field ϕ is the divergence of the gradient of ϕ. In Cartesian coordinates we have ∇2ϕ = ∇· ∇ϕ = ∂2ϕ ∂x2 + ∂2ϕ ∂y2 + ∂2ϕ ∂z2 . (2.19) 41 Example 4 If ϕ = x2y2z3 + xz4 then ∇2ϕ = 2y2z3 + 2x2z3 + 6x2y2z + 12xz2. 2.6 The Curl Definition. The curl of a vector field A is another vector field given by ∇× A = ˆ ı ˆ  ˆ k ∂x ∂y ∂z Ax Ay Az (2.20) = ˆ ı (∂yAz −∂zAy) + cyclic. (2.21) In this definition (and in all that follows) we employ the notation ∂x := ∂ ∂x, ∂y := ∂ ∂y, and ∂z := ∂ ∂z. (2.22) Example 5 Let A = (3xz, 2xy2, 4xy). Then ∇× A = ˆ ı(4x −0) + ˆ (3x −4y) + ˆ k(2y2 −0) = (4x, 3x −4y, 2y2). Definition. A is solenoidal (or divergence-free) if ∇· A = 0. A is irrotational (or curl-free) if ∇× A = 0. 
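Definitions like these are easy to spot-check by machine. The sketch below (Python with SymPy's vector module; again my tooling choice, not the notes') reproduces Examples 3, 4, and 5:

```python
from sympy.vector import CoordSys3D, divergence, curl, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Example 3: A = (3xz, 2y^2 x, 4xy) has divergence 3z + 4xy.
A = 3*x*z*N.i + 2*y**2*x*N.j + 4*x*y*N.k
print(divergence(A))

# Example 4: the Laplacian of phi, computed as div(grad phi).
phi = x**2*y**2*z**3 + x*z**4
print(divergence(gradient(phi)))  # 2y^2 z^3 + 2x^2 z^3 + 6x^2 y^2 z + 12x z^2

# Example 5: A = (3xz, 2xy^2, 4xy) has curl (4x, 3x - 4y, 2y^2).
A = 3*x*z*N.i + 2*x*y**2*N.j + 4*x*y*N.k
print(curl(A))
```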
Claim 2.2. DCG≡0. i.e., i) ∇· (∇× A) ≡0. 42 ii) ∇× ∇ϕ ≡0. Proof. We illustrate the proof for (i). The proof of (ii) is similar. We have ∇· (∇× A) = ∇· (∂yAz −∂zAy, ∂zAx −∂xAz, ∂xAy −∂yAx) = ∂x(∂yAz −∂zAy) + ∂y(∂zAx −∂xAz) + ∂z(∂xAy −∂yAx) = ∂x∂yAz −∂x∂zAy + ∂y∂zAx −∂y∂xAz + ∂z∂xAy −∂z∂yAx = 0, where we used the crucial fact that mixed partial derivatives commute ∂2f ∂xi∂xj = ∂2f ∂xj∂xi , (2.23) for any twice differentiable function. 2.7 Vector Calculus with Indices Remark. In this section we employ the summation convention without comment. Recall that the gradient operator ∇in Cartesian coordinates is the vector differential operator given by ∇:=  ∂ ∂x, ∂ ∂y, ∂ ∂z  = (∂x, ∂y, ∂z). (2.24) It follows that the ith component of the gradient of a scalar field ϕ is just (∇ϕ)i = ∂iϕ. (2.25) 43 Similarly, the divergence of a vector field A is written ∇· A = ∂iAi. (2.26) The Laplacian operator may be viewed as the divergence of the gradient, so (2.25) and (2.26) together yield the Laplacian of a scalar field ϕ: ∇2ϕ = ∂i∂iϕ. (2.27) Finally, the curl becomes (∇× A)i = εijk∂jAk. (2.28) Once again, casting formulae into index notation greatly simplifies some proofs. As a simple example we demonstrate the fact that the divergence of a curl is always zero: ∇· (∇× A) = ∂i(εijk∂jAk) = εijk∂i∂jAk = 0. (2.29) (Compare this to the proof given in Section 2.6.) The first equality is true by definition, while the second follows from the fact that the epsilon tensor is constant (0, 1, or −1), so it pulls out of the derivative. We say the last equality holds “by inspection”, because (i) mixed partial derivatives commute (cf., (2.23)) so ∂i∂jAk is symmetric under the interchange of i and j, and (ii) the contracted product of a symmetric and an antisymmetric tensor is identically zero. The proof of (ii) goes as follows. Let Aij be a symmetric tensor and Bij 44 be an antisymmetric tensor. This means that Aij = Aji (2.30) and Bij = −Bji, (2.31) for all pairs i and j. Then AijBij = AjiBij (using (2.30)) (2.32) = −AjiBji (using (2.31)) (2.33) = −AijBij (switching dummy indices i and j) (2.34) = 0. (2.35) Be sure you understand each step in the sequence above. The tricky part is switching the dummy indices in step three. We can always do this in a sum, provided we are careful to change all the indices of the same kind with the same letter. For example, given two vectors C and D, their dot product can be written as either CiDi or CjDj, because both expressions are equal to C1D1 + C2D2 + C3D3. It does not matter whether we use i as our dummy index or whether we use j—the sum is the same. But note that it would not be true if the indices were not summed. The same argument shows that εijk∂i∂jAk = 0, because the epsilon tensor is antisymetric under the interchange of i and j, while the partial derivatives are symmetric under the same interchange. (The k index just goes along for the ride; alternatively, the expression vanishes for each of k = 1, 2, 3, so the sum over k also vanishes.) Let us do one more vector calculus identity for the road. This time, we 45 prove the identity: ∇× (∇× A) = ∇(∇· A) −∇2A. (2.36) Consider what is involved in proving this the old-fashioned way. We first have to expand the curl of A, and then take the curl of that. So we the first few steps of a demonstration along these lines would look like this: ∇× (∇× A) = ∇× (∂yAz −∂zAy, ∂zAx −∂xAz, ∂xAy −∂yAx) = (∂y (∂xAy −∂yAx) −∂z (∂zAx −∂xAz) , . . . ) = . . . . 
We would then have to do all the derivatives and collect terms to show that we get the right hand side of (2.36). You can do it this way, but it is unpleasant. A more elegant proof using index notation proceeds as follows: [∇× (∇× A)]i = εijk∂j(∇× A)k (using (2.28)) = εijk∂j(εklm∂lAm) (using (2.28) again) = εijkεklm∂j(δlAm) (as εklm is constant) = (δilδjm −δimδjl)∂j∂lAm (from (1.81)) = ∂i∂jAj −∂j∂jAi (substitution property of δ) = ∂i(∇· A) −∇2Ai, ((2.26) and (2.27)) and we are finished. (You may want to compare this with the proof of (1.86).) 46 2.8 Problems 1) Write down equations for the tangent plane and normal line to the surface x2y + y2z + z2x + 1 = 0 at the point (1, 2, −1). 2) Old postal regulations dictated that the maximum size of a rectangular box that could be sent parcel post was 108′′, measured as length plus girth. (If the box length is z, say, then the girth is 2x + 2y, where x and y are the lengths of the other sides.) What is the maximum volume of such a package? 3) Either directly or using index methods, show that, for any scalar field ϕ and vector field A, (a) ∇· (ϕA) = ∇ϕ · A + ϕ∇· A, (b) ∇× (ϕA) = ∇ϕ × A + ϕ∇× A, and (c) ∇2(ϕψ) = (∇2ϕ)ψ + 2∇ϕ · ∇ψ + ϕ∇2ψ. 4) Show that, for r ̸= 0, (a) ∇· ˆ r = 2/r, and (b) ∇× ˆ r = 0. 5) A function f(r) = f(x, y, z) is homogeneous of degree k if f(ar) = akf(r) (2.37) for any nonzero constant a. Prove Euler’s Theorem, which states that, for any homogeneous function f of degree k, (r · ∇)f = kf. (2.38) [Hint: Differentiate both sides of (2.37) with respect to a and use the chain rule, then evaluate at a = 1.] 6) A function ϕ satisfying ∇2ϕ = 0 is called harmonic. (a) Using Cartesian coordinates, show that ϕ = 1/r is harmonic, where r = (x2 + y2 + z2)1/2 ̸= 0. 47 (b) Let α = (α1, α2, α3) be a vector of nonnegative integers, and define |α| = α1 + α2 + α3. Let ∂α be the differential operator ∂α1 x ∂α2 y ∂α3 z . Prove that any function of the form ϕ = r2|α|+1∂α(1/r) is harmonic. [Hint: Use vector calculus identities to expand out ∇2(rnf) where f := ∂α(1/r) and use Euler’s theorem and the fact that mixed partials commute.] 7) Using index methods, prove the following vector calculus identities: (a) ∇· (A × B) = B · (∇× A) −A · (∇× B). (b) ∇(A · B) = (B · ∇)A + (A · ∇)B + B × (∇× A) + A × (∇× B). 3 Vector Calculus II: Other Coordinate Sys-tems 3.1 Change of Variables from Cartesian to Spherical Polar So far we have dealt exclusively with Cartesian coordinates. But for many problems it is more convenient to analyse the problem using a different coordi-nate system. Here we see what is involved in translating the vector operators to spherical coordinates, leaving the task for other coordinate systems to the reader. First we recall the relationship between Cartesian coordinates and spher-ical polar coordinates (see Figure 9): x = r sin θ cos φ r = (x2 + y2 + z2)1/2 y = r sin θ sin φ θ = cos−1 z (x2 + y2 + z2)1/2 (3.1) z = r cos θ φ = tan−1 y x. 48 y z x r ˆ r ˆ θ ˆ φ θ φ Figure 9: Spherical polar coordinates and corresponding unit vectors 3.2 Vector Fields and Derivations Next we need the equations relating the Cartesian unit vectors ˆ x, ˆ y, and ˆ z, to the spherical polar unit vectors ˆ r, ˆ θ, and ˆ φ. To do this we introduce a new idea, namely the idea of vector field as derivation. We have already encountered the basic idea above. Suppose you walk along a curve r(t) in the presence of a scalar field ϕ. Then the rate of change of ϕ along the curve is dϕ(t) dt = (v · ∇)ϕ. 
(3.2) On the left side of this expression we have the derivative of ϕ along the curve, while on the right side we have the directional derivative of ϕ in a direction tangent to the curve. We can dispense with ϕ altogether, and simply write d dt = v · ∇. (3.3) 49 That is, d/dt, the derivative with respect to t, the parameter along the curve, is the same thing as directional derivative in the v direction. This allows us to identify the derivation d/dt and the vector field v. 10 To every vector field there is a derivation, namely the directional derivative in the direction of the vector field, and vice versa, so mathematicians often identify the two concepts. For example, let us walk along the x axis with some speed v. Then d dt = dx dt ∂ ∂x = v ∂ ∂x, (3.4) so (3.3) becomes v ∂ ∂x = vˆ x · ∇. (3.5) Dividing both sides by v gives ∂ ∂x = ˆ x · ∇, (3.6) which is consistent with our previous results. Hence we write ∂ ∂x ← →ˆ x (3.7) to indicate that the derivation on the left corresponds to the vector field on the right. Clearly, an analogous result holds for ˆ y and ˆ z. Note also that (3.6) is an equality whereas (3.7) is an association. Keep this distinction in mind to avoid confusion. Suppose instead that we were to move along a longitude in the direction 10A derivation D is a linear operator obeying the Leibniz rule. That is, D(φ + ψ) = Dφ + Dψ, and D(φψ) = (Dφ)ψ + φDψ. 50 of increasing θ. Then we would have d dt = dθ dt ∂ ∂θ, (3.8) and (3.3) would become dθ dt ∂ ∂θ = vˆ θ · ∇. (3.9) But now dθ/dt is not the speed. Instead, v = rdθ dt , (3.10) so (3.9) yields 1 r ∂ ∂θ = ˆ θ · ∇. (3.11) This allows us to identify 1 r ∂ ∂θ ← →ˆ θ. (3.12) We can avoid reference to the speed of the observer by the following method. From the chain rule ∂ ∂r = ∂x ∂r  ∂ ∂x + ∂y ∂r  ∂ ∂y + ∂z ∂r  ∂ ∂z ∂ ∂θ = ∂x ∂θ  ∂ ∂x + ∂y ∂θ  ∂ ∂y + ∂z ∂θ  ∂ ∂z (3.13) ∂ ∂φ = ∂x ∂φ  ∂ ∂x + ∂y ∂φ  ∂ ∂y + ∂z ∂φ  ∂ ∂z. 51 Using (3.1) gives ∂x ∂r = sin θ cos φ ∂y ∂r = sin θ sin φ ∂z ∂r = cos θ ∂x ∂θ = r cos θ cos φ ∂y ∂θ = r cos θ sin φ ∂z ∂θ = −r sin θ (3.14) ∂x ∂φ = −r sin θ sin φ ∂y ∂φ = r sin θ cos φ ∂z ∂φ = 0, so ∂ ∂r = sin θ cos φ ∂ ∂x + sin θ sin φ ∂ ∂y + cos θ ∂ ∂z (3.15) ∂ ∂θ = r cos θ cos φ ∂ ∂x +r cos θ sin φ ∂ ∂y −r sin θ ∂ ∂z (3.16) ∂ ∂φ = −r sin θ sin φ ∂ ∂x +r sin θ cos φ ∂ ∂y. (3.17) Now we just identify the derivations on the left with multiples of the corre-sponding unit vectors. For example, if we write ∂ ∂θ ← →αˆ θ, (3.18) then from (3.16) we get αˆ θ = r cos θ cos φ ˆ x + r cos θ sin φ ˆ y −r sin θ ˆ z. (3.19) The vector on the right side of (3.19) has length r, which means that α = r, and we recover (3.12) from (3.18). Furthermore, we also conclude that ˆ θ = cos θ cos φ ˆ x + cos θ sin φ ˆ y −sin θ ˆ z. (3.20) 52 Continuing in this way gives ˆ r ← →∂ ∂r (3.21) ˆ θ ← →1 r ∂ ∂θ (3.22) ˆ φ ← → 1 r sin θ ∂ ∂φ (3.23) and ˆ r = sin θ cos φ ˆ x + sin θ sin φ ˆ y + cos θ ˆ z (3.24) ˆ θ = cos θ cos φ ˆ x + cos θ sin φ ˆ y −sin θ ˆ z (3.25) ˆ φ = −sin φ ˆ x + cos φ ˆ y. (3.26) If desired, we could now use (3.1) to express the above equations in terms of Cartesian coordinates. 3.3 Derivatives of Unit Vectors The reason why vector calculus is simpler in Cartesian coordinates than in any other coordinate system is that the unit vectors ˆ x, ˆ y and ˆ z are constant. This means that, no matter where you are in space, these vectors never change length or direction. 
But it is immediately apparent from (3.24)-(3.26) (see also Figure 9) that the spherical polar unit vectors ˆ r, ˆ θ, and ˆ φ vary in direction (though not in length) as we move around. It is this difference that makes vector calculus in spherical coordinates a bit of a mess. Thus, we must compute how the spherical polar unit vectors change as 53 we move around. A little calculation yields ∂ˆ r ∂r = 0 ∂ˆ r ∂θ = ˆ θ ∂ˆ r ∂φ = sin θ ˆ φ ∂ˆ θ ∂r = 0 ∂ˆ θ ∂θ = −ˆ r ∂ˆ θ ∂φ = cos θ ˆ φ (3.27) ∂ˆ φ ∂r = 0 ∂ˆ φ ∂θ = 0 ∂ˆ φ ∂φ = −(sin θ ˆ r + cos θ ˆ θ) 3.4 Vector Components in a Non-Cartesian Basis We began these notes by observing that the Cartesian components of a vector can be found by computing inner products. For example, the x component of a vector A is just ˆ x · A. Similarly, the spherical polar components of the vector A are defined by A = Arˆ r + Aθ ˆ θ + Aφ ˆ φ. (3.28) Equivalently, A = ˆ r(ˆ r · A) + ˆ θ(ˆ θ · A) + ˆ φ( ˆ φ · A). (3.29) 3.5 Vector Operators in Spherical Coordinates We are finally ready to find expressions for the gradient, divergence, curl, and Laplacian in spherical polar coordinates. We begin with the gradient operator. According to (3.29) we have ∇= ˆ r(ˆ r · ∇) + ˆ θ(ˆ θ · ∇) + ˆ φ( ˆ φ · ∇). (3.30) In this formula the unit vectors are followed by the derivations in the direction of the unit vectors. But the latter are precisely what we computed in (3.21)-54 (3.23), so we get ∇= ˆ r∂r + ˆ θ1 r∂θ + ˆ φ 1 r sin θ∂φ. (3.31) Example 6 If ϕ = r2 sin2 θ sin φ then ∇ϕ = 2r sin2 θ sin φ ˆ r+2r sin θ cos θ sin φ ˆ θ+ r sin θ cos φ ˆ φ. The divergence is a bit trickier. Now we have ∇· A = (ˆ r∂r + ˆ θ1 r∂θ + ˆ φ 1 r sin θ∂φ) · (Arˆ r + Aθ ˆ θ + Aφ ˆ φ). (3.32) To compute this expression we must act first with the derivatives, and then take the dot products. This gives ∇· A = ˆ r · ∂r(Arˆ r + Aθ ˆ θ + Aφ ˆ φ) + 1 r ˆ θ · ∂θ(Arˆ r + Aθ ˆ θ + Aφ ˆ φ) + 1 r sin θ ˆ φ · ∂φ(Arˆ r + Aθ ˆ θ + Aφ ˆ φ). (3.33) With a little help from (3.27) we get ∂r(Arˆ r + Aθ ˆ θ + Aφ ˆ φ) = (∂rAr)ˆ r + Ar(∂rˆ r) + (∂rAθ)ˆ θ + Aθ(∂r ˆ θ) + (∂rAφ) ˆ φ + Aφ(∂r ˆ φ) = (∂rAr)ˆ r + (∂rAθ)ˆ θ + (∂rAφ) ˆ φ, (3.34) ∂θ(Arˆ r + Aθ ˆ θ + Aφ ˆ φ) = (∂θAr)ˆ r + Ar(∂θˆ r) + (∂θAθ)ˆ θ + Aθ(∂θ ˆ θ) + (∂θAφ) ˆ φ + Aφ(∂θ ˆ φ) = (∂θAr)ˆ r + Ar ˆ θ + (∂θAθ)ˆ θ −Aθˆ r + (∂θAφ) ˆ φ, (3.35) 55 and ∂φ(Arˆ r + Aθ ˆ θ + Aφ ˆ φ) = (∂φAr)ˆ r + Ar(∂φˆ r) + (∂φAθ)ˆ θ + Aθ(∂φ ˆ θ) + (∂φAφ) ˆ φ + Aφ(∂φ ˆ φ) = (∂φAr)ˆ r + Ar(sin θ ˆ φ) + (∂φAθ)ˆ θ + Aθ(cos θ ˆ φ) + (∂φAφ) ˆ φ −Aφ(sin θ ˆ r + cos θ ˆ θ). (3.36) Taking the dot products and combining terms gives ∇· A = ∂rAr + 1 r(Ar + ∂θAθ) + 1 r sin θ(Ar sin θ + Aθ cos θ + ∂φAφ) = ∂Ar ∂r + Ar r  + 1 r ∂Aθ ∂θ + Aθ cos θ r sin θ  + 1 r sin θ ∂Aφ ∂φ = 1 r2 ∂ ∂r r2Ar  + 1 r sin θ ∂ ∂θ(sin θAθ) + 1 r sin θ ∂Aφ ∂φ . (3.37) Well, that was fun. Similar computations, which are left to the reader :-), yield the curl: ∇× A = 1 r sin θ  ∂ ∂θ(sin θAφ) −∂Aθ ∂φ  ˆ r + 1 r  1 sin θ ∂Ar ∂φ −∂ ∂r(rAφ)  ˆ θ + 1 r  ∂ ∂r(rAθ) −∂Ar ∂θ  ˆ φ, (3.38) and the Laplacian ∇2 = 1 r2 ∂ ∂r  r2 ∂ ∂r  + 1 r2 sin θ ∂ ∂θ  sin θ ∂ ∂θ  + 1 r2 sin2 θ ∂2 ∂φ2. (3.39) Example 7 Let A = r2 sin θ ˆ r + 4r2 cos θ ˆ θ + r2 tan θ ˆ φ. Then ∇· A = 56 4r cos2 θ/ sin θ, and ∇× A = −r ˆ r −3r tan θ ˆ θ + 11r cos θ ˆ φ. 3.6 Problems 1) The transformation relating Cartesian and cylindrical coordinates is x = ρ cos θ ρ = (x2 + y2)1/2 y = ρ sin θ θ = tan−1 y x (3.40) z = z z = z Using the methods of this section, show that the gradient operator in cylin-drical coordinates is given by ∇= ˆ ρ∂ρ + 1 ρ ˆ θ∂θ + ˆ z∂z. 
(3.41) 4 Vector Calculus III: Integration Integration is the flip side of differentiation—you cannot have one without the other. We begin with line integrals, then continue on to surface and volume integrals and the relations between them. 4.1 Line Integrals There are many different types of line integrals, but the most important type arises as the inverse of the gradient function. Given a parameterized curve γ(t) and a vector field A, the line 11 integral of A along the curve γ(t) is usually written Z γ A · dℓ, (4.1) 11The word ‘line’ is a bit of a misnomer in this context, because we really mean a ‘curve’ integral, but we will follow standard terminology. 57 where dℓis the infinitessimal tangent vector to the curve. This notation fails to specify where the curve begins and ends. If the curve starts at a and ends at b, the same integral is usually written Z b a A · dℓ, (4.2) but the problem here is that the curve is not specified. The best notation would be Z b a;γ A · dℓ, (4.3) or some such, but unfortunately, no one does this. Thus, one usually has to decide from context what is going on. As written, the line integral is merely a formal expression. We give it meaning by the mathematical operation of ‘pullback’, which basically means using the parameterization to write it as a conventional integral over a line segment in Euclidean space. Thus, if we write γ(t) = (x(t), y(t), z(t)), then dℓ/dt = dγ(t)/dt = v is the velocity with which the curve is traversed, so Z γ A · dℓ= Z t1 t0 A(γ(t)) · dℓ dt dt = Z t1 t0 A(γ(t)) · v dt. (4.4) This last expression is independent of the parameterization used, which means it depends only on the curve. Suppose t∗= t∗(t) were some other 58 parameterization. Then we would have Z t∗ 1 t∗ 0 A(γ(t∗)) · v(t∗) dt∗= Z t1 t0 A(γ(t∗(t))) · dγ(t∗(t)) dt∗ dt∗ = Z t1 t0 A(γ(t)) · dγ(t) dt dt dt∗dt∗ = Z t1 t0 A(γ(t)) · dγ(t) dt dt. Example 8 Let A = (4xy, −8yz, 2xz), and let γ be the straight line path from (1, 2, 6) to (5, 3, 5). Every straight line segment from a to b can be parameterized in a natural way by γ(t) = (b−a)t+a. This is clearly a line segment which begins at a when t = 0 and ends up at b when t = 1. In our case we have γ(t) = [(5, 3, 5) −(1, 2, 6)]t + (1, 2, 6) = (4t + 1, t + 2, −t + 6), which implies v(t) = ˙ γ(t) = (4, 1, −1). Thus Z γ A · dℓ = Z 1 0 (4(4t + 1)(t + 2), −8(t + 2)(−t + 6), 2(4t + 1)(−t + 6)) · (4, 1, −1) dt = Z 1 0 (16(4t + 1)(t + 2) −8(t + 2)(−t + 6) −2(4t + 1)(−t + 6)) dt = Z 1 0 (80t2 + 66t −76) dt = 80 3 t3 + 33t2 −76t 1 0 = −49 3 . 59 Physicists usually simplify the notation by writing dℓ= v dt = (dx, dy, dz). (4.5) Although this notation is somewhat ambiguous, it can be used to good effect under certain circumstances. Let A = ∇ϕ for some scalar field ϕ, and let γ be some curve. Then Z γ ∇ϕ · dℓ= Z γ ∂ϕ ∂x, ∂ϕ ∂y , ∂ϕ ∂z  · (dx, dy, dz) = Z γ dϕ = ϕ(b) −ϕ(a). This clearly demonstrates that the line integral (4.4) is indeed the inverse operation to the gradient, in the same way that one dimensional integration is the inverse operation to one dimensional differentiation. Recall that a vector field A is conservative if the line integral R A · dℓ is path independent. That is, for any two curves γ1 and γ2 joining the points a and b, we have Z γ1 A · dℓ= Z γ2 A · dℓ. (4.6) We can express this result another way. As dℓ→−dℓwhen we reverse directions, if traverse the curve in the opposite direction we get the negative of the original path integral: Z γ−1 A · dℓ= − Z γ A · dℓ, (4.7) 60 where γ−1 represents the same curve γ traced backwards. 
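As an aside, the value −49/3 found in Example 8 above is easy to cross-check symbolically. A minimal sketch, assuming Python with sympy (any computer algebra system would do):

    import sympy as sp

    t = sp.symbols('t')
    gamma = sp.Matrix([4*t + 1, t + 2, -t + 6])   # straight line from (1,2,6) to (5,3,5)
    v = gamma.diff(t)                             # velocity (4, 1, -1)
    x, y, z = gamma
    A = sp.Matrix([4*x*y, -8*y*z, 2*x*z])         # the vector field of Example 8
    print(sp.integrate(A.dot(v), (t, 0, 1)))      # prints -49/3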
Combining (4.6) and (4.7) we can write the condition for path independence as Z γ1 A · dℓ= − Z γ−1 2 A · dℓ (4.8) ⇒ Z γ1 A · dℓ+ Z γ−1 2 A · dℓ= 0 (4.9) ⇒ I γ A · dℓ= 0, (4.10) where γ = γ1 + γ−1 2 is the closed curve obtained by following γ1 from a to b and then γ−1 2 from b back to a. 12 The line integral of a vector field A is usually called the circulation of A, so A is conservative if the circulation of A vanishes around every closed curve. Example 9 Consider the vector field A = (x2 −y2, 2xy) in the plane. Let γ1 be the curve given by y = 2x2 and let γ2 be the curve given by y = 2√x. Let the endpoints be a = (0, 0) and b = (1, 2). This situation is sufficiently simple that a parameterization is unnecessary. We compute as follows: Z γ1 A · dℓ= Z γ1 (Ax dx + Ay dy) = Z γ1 (x2 −y2) dx + 2xy dy = Z 1 0 (x2 −4x4) dx + 4x3 · 4x dx = x3 3 −4x5 5 + 16x5 5  1 0 = 1 3 + 12 5 = 41 15. In the computation above we substituted in y as a function of x along the curve, then used the x limits. We could just as well have solved for x in terms of y and then solved the integral in the variable y instead. We do this in the next integral 12The circle on the integral sign merely serves to remind us that the integral is taken around a closed curve. 61 to avoid messy square roots. Since x = (y/2)2 along γ2, we get Z γ2 A · dℓ= Z γ2 (Ax dx + Ay dy) = Z γ2 (x2 −y2) dx + 2xy dy = Z 2 0 y4 16 −y2  · y 2 dy + y3 2 dy =  y6 160  2 0 = 2 5. Evidently, these are not equal. Hence the vector field A = (x2 −y2, 2xy) is not conservative. But suppose we began with the vector field A = (x2 −y2, −2xy) instead. Now carrying out the same procedure as above would give R γ1 A·dℓ= −11/3 = R γ2 A·dℓ. Can we conclude from this that A is conservative? No! The reason is that we have only shown that the line integral of A is the same along these two curves between these two endpoints. But we must show that we get the same answer no matter which curve and which endpoints we pick. Now, this vector field is sufficiently simple that we can actually tell that it is indeed conservative. We do this by observing that A·dℓis an exact differential, which means that it can be written as dϕ for some function ϕ. In our case the function ϕ = (x3/3) −xy2. (See the discussion below.) Hence Z γ A · dℓ= Z (1,2) (0,0) dϕ = ϕ(1, 2) −ϕ(0, 0) = −11 3 , (4.11) which demonstrates the path independence of the integral. Comparing this analy-sis to our discussion above shows that the reason why A·dℓis an exact differential is because A = ∇ϕ. Example 9 illustrates the fact that any vector field that can be written as the gradient of a scalar field is conservative. This brings us naturally to the question of determining when such a situation holds. An obvious necessary 62 condition is that the curl of the vector field must vanish (because the curl of a gradient is identically zero). It turns out that this condition is also sufficient. That is, if ∇× A = 0 for some vector field A then A = ∇ϕ for some ϕ. This follows from a lemma of Poincar´ e that we will not discuss here. 13 Example 10 Consider again the two vector fields from Example 9. The first one, namely A = (x2 −y2, 2xy, 0), satisfies ∇× A = 4y ˆ k, and so is non-conservative, whereas the second one, namely A = (x2 −y2, −2xy, 0), satisfies ∇× A = 0 and so is conservative, as previously demonstrated. This gives us an easy criterion to test for conservative vector fields, but it does not produce the corresponding scalar field for us. To find this, we use partial integration. 
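Before turning to the partial integration procedure, note that the curl criterion of Example 10 is also easy to check symbolically. A minimal sketch, again assuming Python with sympy:

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def curl(F):
        # componentwise definition of the curl in Cartesian coordinates
        Fx, Fy, Fz = F
        return (sp.diff(Fz, y) - sp.diff(Fy, z),
                sp.diff(Fx, z) - sp.diff(Fz, x),
                sp.diff(Fy, x) - sp.diff(Fx, y))

    print(curl((x**2 - y**2,  2*x*y, 0)))   # (0, 0, 4*y): not conservative
    print(curl((x**2 - y**2, -2*x*y, 0)))   # (0, 0, 0):   conservative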
Suppose we are given a vector field A = (Ax, Ay, Az). If A = ∇ϕ for some ϕ, then ∂ϕ ∂x = Ax, ∂ϕ ∂y = Ay, and ∂ϕ ∂z = Az. (4.12) Partially integrating these equations gives ϕ = Z Ax dx + f(y, z), (4.13) ϕ = Z Ay dy + g(x, z), and (4.14) ϕ = Z Az dz + h(x, y), (4.15) where f, g, and h are unknown functions of the given variables. 14 If (4.13)-13There is one caveat, which is that the conclusion only holds if the region over which the curl vanishes is simply connected. Roughly speaking, this means the region has no ‘holes’. 14We are integrating the vector field components with respect to one variable only, which is why it is called partial integration. 63 (4.15) can be solved consistently for a function ϕ, then A = ∇ϕ. Example 11 Let A = (x2 −y2, −2xy, 0) as in Example 9. To show that it can be written as the gradient of a scalar field we partially integrate the components to get ϕ = Z (x2 −y2) dx + f(y, z) = 1 3x3 −xy2 + f(y, z), ϕ = Z (−2xy) dy + g(x, z) = −xy2 + g(x, z), and ϕ = 0 + h(x, y). These equations can be made consistent if we choose f = 0, g = 1 3x3, and h = 1 3x3 −xy2. (4.16) so ϕ = (x3/3) −xy2 is the common solution. The reader should verify that this procedure fails for the nonconservative vector field A = (x2 −y2, 2xy, 0). 4.2 Surface Integrals Let S be a two dimensional surface in space, and let A be a vector field. Then the flux of A through the surface S is Z S A · dS. (4.17) In this formula dS = ˆ ndS, where dS is the infinitessimal area element on the surface and ˆ n points orthogonally to the surface. As before, this integral is defined in terms of a parameterization. A surface is a two dimensional object, so it depends on two parameters, which we will usually denote by u and v. Let σ(u, v) : R2 →R3 be such a parameterization. As u and v 64 line of constant u on S σv line of constant v on S σu n u v σ S Figure 10: A parameterized surface vary over some domain D ⊆R2, σ(u, v) traces out the surface S in R3. (See Figure 10.) If we fix v and let u vary, then we get a line of constant v on S. The tangent vector field to this line is just σu := ∂σ/∂u. Similarly, σv := ∂σ/∂v gives the tangent vector field to the lines of constant u. The normal vector to the surface 15 is therefore n = σu × σv. (4.18) 15When we say ‘the’ normal vector, we really mean ‘a’ normal vector, because the vector depends on the parametrization. Even if we normalize the vector to have length one there is still some ambiguity, because a surface has two sides, and therefore two unit normal vectors at every point. If one is interested in, say, the flux through a surface in one direction, then one must select the normal vector in that direction. If the surface is closed, then we usually choose the outward pointing normal in order to be consistent with Gauss’ theorem. (See the discussion below.) 65 Now we define the surface integral in terms of its parameterization: Z S A · dS = Z D A(σ(u, v)) · n dudv. (4.19) Once again, one can show the integral is independent of the parameterization by changing variables. Example 12 Let A = (y, 2y, xz) and let S be the paraboloid of revolution obtained by rotating the curve z = 2x2 about the z axis, where 0 ≤z ≤3. To compute the flux integral we must first parameterize the surface. One possible parameterization is σ = (u, v, 2(u2 + v2)), while another is σ = (u cos v, u sin v, 2u2). Let us choose the latter. Then the domain D is 0 ≤u ≤ p 3/2 and 0 ≤v ≤2π. 
Also, σu = (cos v, sin v, 4u) and σv = (−u sin v, u cos v, 0), so n = σu × σv = ˆ ı ˆ  ˆ k cos v sin v 4u −u sin v u cos v 0 = (−4u2 cos v, −4u2 sin v, u). (Note that this normal points inward towards the z axis. If you were asked for the 66 flux in the other direction you would have to use −n instead.) Thus Z S A · dS = Z D A(σ(u, v)) · n dudv = Z D (u sin v, 2(u sin v), (u cos v)(2u2)) · (−4u2 cos v, −4u2 sin v, u) dudv = Z D (−4u3 sin v cos v −8u3 sin2 v + 2u4 cos v)dudv The first and third terms disappear when integrated over v, leaving only Z S A · dS = −8 Z √ 3/2 0 u3 du Z 2π 0 sin2 v dv = −2u4 √ 3/2 0 · π = −9 2π. 4.3 Volume Integrals We will usually consider only volume integrals of scalar fields of the form Z V f dτ, where dτ is the infinitessimal volume element. For example, in Cartesian coordinates we would compute Z V f(x, y, z) dx dy dz. In any other coordinate system we must use the change of variables theorem. Theorem 4.1. Let F : X →Y be a map from X ⊆Rn to Y ⊆Rn, and let f be an integrable function on Y . Then Z Y f dy1 . . . dyn = Z X (f ◦F) | det J| dx1 . . . dxn, (4.20) 67 where J is the Jacobian matrix of partial derivatives: Jij = ∂yi ∂xj . (4.21) The map F is the change of variables map, given explicitly by the functions yi = Fi(x1, x2, . . . , xn). Geometrically, the theorem says that the integral of f over Y is not equal to the integral of f ◦F over X, because the map-ping F distorts volumes. The Jacobian factor compensates precisely for this distortion. Example 13 Let f = x2y2z2, and let V be a sphere of radius 2. It makes sense to change variables from Cartesian to spherical polar coordinates because then the domain of integration becomes much simpler. In the above theorem the ‘x’ coordinates are (r, θ, φ) and the ‘y’ coordinates are (x, y, z). The Jacobian factor can be computed from (3.14): |J| = sin θ cos φ r cos θ cos φ −r sin θ sin φ sin θ sin φ r cos θ sin φ r sin θ cos φ cos θ −r sin θ 0 = r2 sin θ. Hence Z V f(x, y, z) dx dy dz = Z 2 0 r2 dr Z π 0 sin θ dθ Z 2π 0 dφ f(r, θ, φ) = Z 2 0 r2 dr Z π 0 sin θ dθ Z 2π 0 dφ (r2 sin2 θ cos2 φ) · (r2 sin2 θ sin2 φ)(r2 cos2 θ). The r integral is Z 2 0 r8 dr = 1 9r9 = 512 9 . 68 The θ integral is Z π 0 sin θ(sin4 θ cos2 θ) dθ = − Z π 0 (1 −cos2 θ)2(cos2 θ)d(cos θ) =  −cos3 θ 3 + cos5 θ 5  π 0 = 2 3 −2 5 = 4 15. The φ integral is Z 2π 0 sin2 φ cos2 φ dφ = 1 4 Z 2π 0 sin2 2φ dφ = 1 8 Z 4π 0 sin2 φ′ dφ′ = π 4 . Putting everything together gives Z V f dτ = 512 9 · 4 15 · π 4 = 512 135π. 4.4 Problems 1) Verify that each of the following vector fields F is conservative in two ways: first by showing that ∇× F = 0, and second by finding a function ϕ such that F = ∇ϕ. (a) F = (1, −z, −y). (b) F = (3x2yz −3y, x3z −3x, x3y + 2z). (c) F = y p 1 −x2y2 , x p 1 −x2y2 , 0 ! . 2) By explicitly evaluating the line integrals, calculate the work done by the force field F = (1, −z, −y) on a particle when it is moved from (1, 0, 0) to (−1, 0, π) (i) along the helix (cos t, sin t, t), and (ii) along the straight line joining the two points. (iii) Do you expect your answers to (i) and (ii) to be the same? Explain. 69 3) Let A = (y2, 2x, 1). Evaluate the line integral Z γ A · dℓ between (0, 0, 0) and (1, 1, 1), where (a) γ is the piecewise linear path from (0, 0, 0) to (1, 0, 0) to (1, 0, 1) to (1, 1, 1). (b) γ is the path going from (0, 0, 0) to (1, 1, 0) along an arc of the circle x2 + y2 −2y = 0, and then from (1, 1, 0) to (1, 1, 1) along a straight line segment. (c) Should the answers to (a) and (b) have been the same? Explain. 
4) The helicoid admits the parameterization σ = (u cos v, u sin v, av). Compute the area of the helicoid over the domain 0 ≤u ≤1 and 0 ≤v ≤2π. 5) Compute the surface integral R S F · dS, where F = (1, x2, xyz) and the surface S is given by z = xy, with 0 ≤x ≤y and 0 ≤y ≤1. 5 Integral Theorems Let f be a differentiable function on the interval [a, b]. Then, by the Funda-mental Theorem of Calculus Z b a f ′(x) dx = f(b) −f(a). (5.1) In this section we discuss a generalization of this theorem to functions of many variables. The best formulation of this theorem is expressed in the language of manifolds and differential forms, which are, unfortunately, slightly beyond the scope of these lectures. Therefore we will have to content ourselves with rather more pedestrian formulations. 70 5.1 Green’s Theorem The simplest generalization of the Fundamental Theorem of Calculus to two dimensions is Green’s Theorem. 16 It relates an area integral over a region to a line integral over the boundary of the region. Let R be a region in the plane with a simple closed curve boundary ∂R.17 Then we have Theorem 5.1 (Green’s Theorem). For any differentiable functions P and Q in the plane, Z R ∂Q ∂x −∂P ∂y  dx dy = I ∂R (P dx + Q dy), (5.2) where the boundary ∂R is traversed counterclockwise. Sketch of Proof. The proof of Green’s theorem is included in most vector calculus textbooks, but it is worth pointing out some of the basic ideas involved. Consider a square in the plane with lower left corner at (a, a) and upper right corner at (b, b). Then Z R ∂P ∂y dx dy = Z b a dx Z b a ∂P(x, y) ∂y dy = Z b a (P(x, b) −P(x, a)) dx, where the last equality follows from the Fundamental Theorem of Calculus. The boundary ∂R consists of the four sides of the square, oriented as follows: γ1 from (a, a) to (b, a), γ2 from (b, a) to (b, b), γ3 from (b, b) to (a, b), and γ4 16Named after the British mathematician and physicist George Green (1793-1841). 17In this context, the notation ∂R does not mean ‘derivative’. Instead it represents the curve that bounds the region R. A simple closed curve is a closed curve that has no self-intersections. 71 from (a, b) to (a, a). Considering the meaning of the line integrals involved, we see that Z γ1 P dx = Z b a P(x, a) dx, Z γ3 P dx = Z a b P(x, b) dx, and (since x is fixed along γ2 and γ4), Z γ2 P dx = Z γ4 P dx = 0, from which it follows that I ∂R P dx = I γ1+γ2+γ3+γ4 P dx = Z b a (P(x, a) −P(x, b)) dx. Thus we have shown that Z R ∂P ∂y dx dy = − I ∂R P dx. A similar argument yields Z R ∂Q ∂x dx dy = I ∂R Q dy. Adding these two results shows that the theorem is true for squares. Now consider a rectangular region R′ consisting of two squares R1 and R2 sharing an edge e. By definition of the integral as a sum, Z R′ f dx dy = Z R1 f dx dy + Z R2 f dx dy 72 for any function f. But also I ∂R′(g dx + h dy) = I ∂R1 (g dx + h dy) + I ∂R2 (g dx + h dy) for any functions g and h, because the contribution to the line integral over ∂R1 coming from e is exactly canceled by the contribution to the line integral over ∂R2 coming from e, since e is traversed one direction in ∂R1 and the opposite direction in ∂R2. It follows that Green’s theorem holds for the rectangle R′, and, by extension, for any region that can be obtained by pasting together squares along their boundaries. By taking small enough squares, any region in the plane can be built this way, so Green’s theorem holds in general. 5.2 Stokes’ Theorem Consider a three dimensional vector field of the form A = Pˆ ı + Qˆ . 
Note that ∇× A = ˆ ı ˆ  ˆ k ∂x ∂y ∂z P Q 0 = (∂xQ −∂yP)ˆ k. (5.3) Let S be a region in the xy plane with boundary ∂S. Let dS = ˆ k dx dy be the area element on S. Then Green’s theorem can be written Z S (∇× A) · dS = I ∂S A · dℓ. (5.4) By a similar argument to that given above in the proof of Green’s theorem, this formula holds for any reasonable surface S in three dimensional space, provided ∂S is traversed in such a way that the surface normal points ‘up-73 wards on the left’ at all times. (Just paste together infinitessimal squares to form S.) In the general case it is known as Stokes’ Theorem. 18 Note that this is consistent with the results of Section 4.1. If the vector field A is conservative, then the right side of (5.4) vanishes for every closed curve. Hence the left side of (5.4) vanishes for every open surface S. The only way this can happen is if the integrand vanishes everywhere, which means that A is irrotational. Thus, to test whether a vector field is conservative we need only check whether its curl vanishes. 5.3 Gauss’ Theorem Yet another integral formula, called Gauss’ Theorem 19 or the divergence theorem has the following statement. Let V be a bounded three dimensional region with two dimensional boundary ∂V oriented so that its normal vector points everywhere outward from the volume. Then, for any well behaved vector field A, Z V (∇· A) dτ = I ∂V A · dS, (5.5) where dτ is the infinitessimal volume element of V . 5.4 The Generalized Stokes’ Theorem There is a clear pattern in all the integral formulae (5.1), (5.4), and (5.5). In each case we have the integral of a derivative of something over an oriented n dimensional region equals the integral of that same something over the 18Named after the British mathematician and physicist Sir George Gabriel Stokes (1819-1903). 19Not to be confused with Gauss’ Law. 74 oriented n −1 dimensional boundary of the region. 20 This idea is made rigorous by the theory of differential forms. Basically, a differential form ω is something that you integrate. Although the theory of differential forms is beyond the scope of these lectures, I cannot resist giving the elegant gen-eralization and unification of all the results we have discussed so far, just to whet your appetite to investigate the matter more thoroughly on your own: Theorem 5.2 (Generalized Stokes’ Theorem). If ω is any smooth n−1 form with compact support on a smooth oriented n-dimensional surface M, and if the boundary ∂M is given the induced orientation, then Z M dω = Z ∂M ω. (5.6) 5.5 Problems 1) Evaluate H S A · dS using Gauss’ theorem, where A = (x2 −y2, 2xyz, −xz2), and the surface S bounds the part of a ball of radius 4 that lies in the first octant. (The ball has equation x2 + y2 + z2 ≤16, and the first octant is the region with x ≥0, y ≥0, and z ≥0.) 20In the case of (5.1) we have an integral over a line segment [a, b] (thought of as oriented from a to b) of the derivative of a function f equals the integral over a zero dimensional region (thought of as oriented positively at b and negatively at a) of f itself, namely f(b) −f(a). 75 A Permutations Let X = {1, 2, . . . , n} be a set of n elements. Informally, a permutation of X is just a choice of ordering for the elements of X. More formally, a permutation of X is a bijection 21 σ : X →X. The collection of all permutations of X is called the symmetric group on n elements and is denoted Sn. It contains n! elements. 
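A quick computational check that Sn has n! elements, together with the sign obtained by counting inversions (the parity of the inversion count agrees with the parity of the number of transpositions). A sketch in Python, anticipating the one-line notation introduced next:

    from itertools import permutations
    from math import factorial

    n = 4
    perms = list(permutations(range(1, n + 1)))
    assert len(perms) == factorial(n)    # S_4 has 4! = 24 elements

    def sign(p):
        # (-1) raised to the number of inversions equals the sign of p
        inv = sum(1 for i in range(len(p))
                    for j in range(i + 1, len(p)) if p[i] > p[j])
        return (-1) ** inv

    print(sign((2, 4, 3, 1)))   # -> 1: the permutation 2431 is even, cf. Example 14 below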
Permutations can be represented in many different ways, but the simplest is just to write down the elements in order. So, for example, if σ(1) = 2, σ(2) = 4, σ(3) = 3, and σ(4) = 1 then we write σ = 2431. The identity permutation, sometimes denoted e, is just the one satisfying σ(i) = i for all i. For example, the identity permutation of S4 is e = 1234. If σ and τ are two permutations, then the product permutation στ is the composite map σ ◦τ. That is, (στ)(i) = σ(τ(i)). For example, if τ(1) = 4, τ(2) = 2, τ(3) = 3, and τ(4) = 1, then τ = 4231 and στ = 2431. The inverse of a permutation σ is just the inverse map σ−1, which satisfies σσ−1 = σ−1σ = e. A transposition is a permutation that switches two numbers and leaves the rest fixed. For example, the permutation 4231 is a transposition, because it flips 1 and 4 and leaves 2 and 3 alone. It is not too difficult to see that Sn is generated by transpositions. This means that any permutation σ may be written as the product of transpositions. Definition. A permutation σ is even if it can be expressed as the product of an even number of transpositions, otherwise it is odd. The sign of a permutation σ, written (−1)σ, is +1 if it is even and −1 if it is odd. 21A bijection is a map that is one-to-one (so that i ̸= j ⇒σ(i) ̸= σ(j)) and onto (so that for every k there is an i such that σ(i) = k). 76 Example 14 One can show that the sign of a permutation is the number of transpositions required to transform it back to the identity permutation. So 2431 is an even permutation (sign +1) because we can get back to the identity permutation in two steps: 2431 1↔2 − − − →1432 2↔4 − − − →1234. Although a given permutation σ can be written in many different ways as a product of transpositions, it turns out that the sign of σ is always the same. Furthermore, as the notation is meant to suggest, (−1)στ = (−1)σ(−1)τ. Both these claims require proof, which we omit. B Determinants Definition. Let A by an n × n matrix. The determinant of A, written det A or |A|, is the scalar given by det A := X σ∈Sn (−1)σA1σ(1)A2σ(2) . . . Anσ(n). (B.1) Remark. In general, determinants are difficult to compute because the above sum has n! terms. There are tricks for special kinds of determinants, but few techniques for general matrices. One general method that works nicely in a wide variety of circumstances is called Dodgson condensation, named after Charles Lutwidge Dodgson, also known as Lewis Carroll, the inventor of Alice in Wonder-land. (Look it up.) 77 Example 15 det  a11 a12 a21 a22  = a11 a12 a21 a22 = a11a22 −a12a21. (B.2) Example 16 det I = 1 because the only term contributing to the sum in (B.1) is the one in which σ is the identity permutation, and its sign is +1. Definition. The transpose AT of the matrix A has components (AT)ij := Aji. (B.3) Remark. The transpose matrix is obtained simply by flipping the matrix about the main diagonal, which runs from A11 to Ann. Lemma B.1. det AT = det A. (B.4) Proof. An arbitrary term of the expansion of det A is of the form (−1)σA1σ(1)A2σ(2) . . . Anσ(n). (B.5) As each number from 1 to n appears precisely once among the set σ(1), σ(2), . . . , σ(n), the product may be rewritten (after some rearrangement) as (−1)σAσ−1(1)1Aσ−1(2)2 . . . Aσ−1(n)n, (B.6) 78 where σ−1 is the inverse permutation to σ. For example, suppose σ(5) = 1. Then there would be a term in (B.5) of the form A5σ(5) = A51. This term appears first in (B.6), as σ−1(1) = 5. 
Since a permutation and its inverse both have the same sign (because σσ−1 = e implies (−1)σ(−1)σ−1 = 1), Equation (B.6) may be written (−1)σ−1Aσ−1(1)1Aσ−1(2)2 . . . Aσ−1(n)n. (B.7) Hence det A = X σ∈Sn (−1)σ−1Aσ−1(1)1Aσ−1(2)2 . . . Aσ−1(n)n. (B.8) As σ runs over all the elements of Sn, so does σ−1, so (B.8) may be written det A = X σ−1∈Sn (−1)σ−1Aσ−1(1)1Aσ−1(2)2 . . . Aσ−1(n)n. (B.9) But this is just det AT. Remark. Equation (B.9) shows that we may also write det A as det A := X σ∈Sn (−1)σAσ(1)1Aσ(2)2 . . . Aσ(n)n. (B.10) B.1 The Determinant as a Multilinear Map Recall that a map T : Rn →R is linear if T(av + bw) = aTv + bTw for all vectors v and w and all scalars a and b. A map S : Rn × Rn × · · · × Rn →R 79 is multilinear if S is linear on each entry. That is, we have S(. . . , av + bw, . . . ) = aS(. . . , v, . . . ) + bS(. . . , w, . . . ). (B.11) Theorem B.2. The determinant, considered as a map on the rows or columns of the matrix, is multilinear. Proof. We show that the determinant is linear on the first row of A. A similar argument then shows that it is linear on all the rows or columns, which means it is a multilinear function. Let A(av + bw, . . . ) be the matrix obtained by replacing the first row of A by the vector av + bw. From (B.1), we have det A(av + bw, . . . ) = X σ∈Sn (−1)σ(avσ(1) + bwσ(1))A2σ(2) · · · Anσ(n) = a X σ∈Sn (−1)σvσ(1)A2σ(2) · · · Anσ(n) + b X σ∈Sn (−1)σwσ(1)A2σ(2) · · · Anσ(n) = a det A(v, . . . ) + b det A(w, . . . ). Lemma B.3. (1) The determinant changes sign whenever any two rows or columns are interchanged. (2) The determinant vanishes if any two rows or columns are equal. (3) The determinant is unchanged if we add a multiple of any row to another row or a multiple of any column to another column. 80 Proof. Let B be the matrix A except with rows 1 and 2 flipped. Then det B = X σ∈Sn (−1)σB1σ(1)B2σ(2) · · · Bnσ(n) = X σ∈Sn (−1)σA2σ(1)A1σ(2) · · · Anσ(n) = X σ′τ∈Sn (−1)σ′τA2(σ′τ)(1)A1(σ′τ)(2) · · · An(σ′τ)(n). (B.12) In the last sum we have written the permutation σ in the form σ′τ, where τ is the transposition that flips 1 and 2 and σ′ is some other permutation. By definition, the action of σ′τ on the numbers (1, 2, 3, . . . , n) is the same as the action of σ′ on the numbers (2, 1, 3, . . . , n). But by the properties of the sign, (−1)σ′τ = (−1)σ′(−1)τ = −(−1)σ′, because all transpositions are odd. Also, σ′ ranges over all permutations of Sn as σ′τ does, because the map from Sn to Sn given by right multiplication by τ is bijective. Putting all this together (and switching the order of two of the A terms in (B.12)) gives det B = − X σ′∈Sn (−1)σ′A1σ′(1)A2σ′(2) · · · Anσ′(n) = −det A. (B.13) The same argument holds for columns by starting with (B.10) instead. This proves property (1). Property (2) then follows immediately, because if B is obtained from A by switching two identical rows (or columns), then B = A, so det B = det A. But by Property 1, det B = −det A, so det A = −det A = 0. Property (3) now follows by the multilinearity of the determinant. Let v be 81 the first row of A and let w be any another row of A. Then, for any scalar b, det A(v + bw, . . . ) = det A(v, . . . ) + b det A(w, . . . ) = det A, (B.14) because the penultimate determinant has two identical rows (w appears in the first row and its original row) and so vanishes by Property (2). The same argument works for any rows or columns. B.2 Cofactors and the Adjugate We now wish to derive another way to compute the determinant. 
To this end, let us investigate the coefficient of A11 in det A. By (B.1) it must be X σ′∈Sn (−1)σ′A2σ′(2) . . . Anσ′(n), (B.15) where σ′ means a general permutation in Sn that fixes σ(1) = 1. But this means the sum in (B.15) extends over all permutations of the num-bers {2, 3, . . . , n}, of which there are (n −1)!. A moment’s reflection reveals that (B.15) is nothing more than the determinant of the matrix obtained from A by removing the first row and first column. As this idea reappears later, we introduce some convenient notation. The n −1 by n −1 matrix obtained from A by deleting the ith row and jth column is denoted A(i|j). Definition. Let Aij be an element of a matrix A. The minor of Aij is det A(i|j). By the previous discussion, the coefficient of A11 appearing in det A is precisely its minor, namely det A(1|1). Now consider a general element Aij. What is its coefficient in det A? Well, consider the matrix A′ obtained from A by moving the ith row up to the first row. To get A′ we must execute 82 i −1 adjacent row flips, so by Lemma B.3, det A′ = (−1)i−1 det A. Now consider the matrix A′′ obtained from A′ by moving the jth column left to the first column. Again by Lemma B.3 we have det A′′ = (−1)j−1 det A′. So det A′′ = (−1)i+j det A. Now the element Aij appears in the (11) position in A′′, so by the reasoning used above, its coefficient in det A′′ is the determinant of A′′(1|1). But this is just det A(i|j). Hence the coefficient of Aij in det A is (−1)i+j det A(i|j). This leads to another Definition. The signed minor or cofactor of Aij is the number given by (−1)i+j det A(i|j). We denote this number by Aij We conclude that the coefficient of Aij in det A is just its cofactor Aij. Now consider the expression A11A11 + A12A12 + · · · + A1nA1n. (B.16) Thinking of the Aij as independent variables, each term in (B.16) is distinct (because, for example, only the first term contains A11, etc.). Moreover, each term appears in (B.16) precisely as it appears in det A (with the correct sign and correct products of elements of A). Finally, (B.16) contains n(n−1)! = n! terms, which is the number that appear in det A. So (B.16) must be det A. Equation (B.16) is called the (Laplace) expansion of det A by the first row. Thinking back over the argument of the previous paragraph we see there is nothing particularly special about the first row. We could have written a corresponding expression for any row or column. Hence we have proved the following 83 Lemma B.4. The determinant of A may be written det A = n X j=1 AijAij, (B.17) for any i, or det A = n X i=1 AijAij, (B.18) for any j. Remark. This proposition allows us to write the following odd looking but often useful formula for the derivative of the determinant of a matrix with respect to one of its elements (treating them all as independent variables): ∂ ∂Aij det A = Aij. (B.19) We may derive another very useful formula from the following considera-tions. Suppose we begin with a matrix A and substitute for the ith row a new row of elements labeled Bij, where j runs from 1 to n. Now, the cofactors of the Bij in the new matrix are obviously the same as those of the Aij in the old matrix, so we may write the determinant of the new matrix as, for instance, Bi1Ai1 + Bi2Ai2 + · · · + BinAin. (B.20) Of course, we could have substituted a new jth column instead, with similar results. Now suppose we let the Bij be the elements of any row of A other than the ith. 
Then the expression in Equation (B.20) will vanish, as the determinant 84 of any matrix with two identical rows is zero. This gives us the following result: Ak1Ai1 + Ak2Ai2 + · · · + AknAin = 0, k ̸= i. (B.21) Again, a similar result holds for columns. We call the cofactors appearing in (B.20) alien cofactors, because they are the cofactors properly correspond-ing to the elements Aij, j = 1, . . . , n, of the ith row of A rather than the kth row. We may summarize (B.21) by saying that expansions in terms of alien cofactors vanish identically. Now we have the following Definition. The adjugate matrix of A, written adj A is the transpose of the matrix of cofactors. That is, (adj A)ij = Aji. Lemma B.5. For any matrix A we have A(adj A) = (adj A)A = (det A)I. (B.22) Proof. Consider the ikth element of A(adj A): [A(adj A)]ik = n X j=1 Aij(adj A)jk = n X j=1 AijAkj. (B.23) If i ̸= k this is an expansion in terms of alien cofactors and vanishes. If i = k then this is just the determinant of A. Hence [A(adj A)]ik = (det A)δik. This proves the first half. To prove the second half, note that (adj A)T = (adj AT). That is, the transpose of the adjugate is the adjugate of the transpose. (Just trace back the definitions.) Hence, using the result (whose proof is left to the reader) that (AB)T = BTAT for any matrices A and B, [(adj A)A)]T = AT(adj A)T = ATadj AT = (det AT)I = (det A)I. (B.24) 85 Definition. A matrix A is singular if det A = 0 and non-singular oth-erwise. Lemma B.6. A matrix A is invertible if and only if it is non-singular. If it is non-singular, its inverse is given by the expression A−1 = 1 det A adj A. (B.25) Proof. Follows immediately from Lemma B.5. B.3 The Determinant as Multiplicative Homomorphism Theorem B.7. Let {vi} and {wj} be two collections of n vectors each, related by a matrix A according to wj = n X i=1 Aijvi. (B.26) Let D(v1, v2, . . . , vn) be the determinant of the n × n matrix whose rows are the vectors v1, . . . , vn. 22 Then D(w1, w2, . . . , wn) = (det A) D(v1, v2, . . . , vn). (B.27) Proof. By hypothesis D(w1, w2, . . . , wn) = D(A11v1 + A21v2 + · · · + An1vn, . . . , A1nv1 + A2nv2 + · · · + Annvn). 22Later this expression will mean the determinant of the n × n matrix whose columns are the vectors v1, . . . , vn. This will not affect any of the results, only the arguments. 86 Expanding out the right hand side using the multilinearity property of the determinant gives a sum of terms of the form Aσ(1)1Aσ(2)2 . . . Aσ(n)nD(vσ(1), . . . , vσ(n)), where σ is an arbitrary map of {1, 2, . . . , n} to itself. If σ is not a bijection (i.e., a permutation) then two vector arguments of D will be equal and the entire term will vanish. Hence the only terms that will appear in the sum are those for which σ ∈Sn. But now, by a series of transpositions of the arguments, we may write D(vσ(1), . . . , vσ(n)) = (−1)σD(v1, v2, . . . , vn), where (−1)σ is the sign of the permutation σ. Hence D(w1, w2, . . . , wn) = X σ∈Sn (−1)σAσ(1)1Aσ(2)2 . . . Aσ(n)nD(v1, . . . , vn). This brings us to the main theorem of this section, which is a remarkable multiplicative property of determinants. 23 23Let X and Y be sets, each equipped with a natural multiplication operation. So, for example, given two elements x1 and x2 in X, their product x1x2 also belongs to X (and similarly for Y ). If φ maps elements of X to elements of Y , and if φ(x1x2) = φ(x1)φ(x2), then we say that φ is a multiplicative homomorphism from X to Y . 
(The word homomorphism comes from the Greek ‘oµoζ’ (‘homos’), meaning the same, and ‘µoρφη’ (‘morphe’), meaning shape or form.) Equation (B.28) is expressed mathematically by saying that the determinant is a multiplicative homomorphism from the set of matrices to the set of scalars. Incidentally, both matrices and scalars also come equipped with an addition operation, which makes them into objects called rings. A homomorphism that respects both the additive and multiplicative properties of a ring is called a ring homomorphism. But the determinant map from matrices to scalars is nonlinear (that is, det(A + B) ̸= det A + det B), so the determinant fails to be a ring homomorphism. 87 Theorem B.8. For any two matrices A and B, det (AB) = (det A)(det B). (B.28) Proof. Choose (v1, v2, . . . , vn) = (ˆ e1, ˆ e2, . . . , ˆ en) where ˆ ei is the ith canonical basis vector of Rn, and let wj = n X i=1 (AB)ijvi. (B.29) Let D(w1, w2, . . . , wn) be the determinant of the matrix whose rows are the vectors w1, . . . , wn. Then by Theorem B.7, D(w1, w2, . . . , wn) = (det (AB))D(ˆ e1, ˆ e2, . . . , ˆ en). (B.30) On the other hand, if uk = n X i=1 Aikvi, (B.31) then, again by Theorem B.7, D(u1, u2, . . . , un) = (det A)D(ˆ e1, ˆ e2, . . . , ˆ en). (B.32) But expanding (B.29) and using (B.31) gives wj = n X i=1 n X k=1 AikBkjvi (B.33) = n X k=1 Bkjuk. (B.34) 88 So using Theorem B.7 again we have, from (B.34) D(w1, w2, . . . , wn) = (det B)D(u1, u2, . . . , un). (B.35) Combining (B.32) and (B.35) gives D(w1, w2, . . . , wn) = (det B)(det A)D(ˆ e1, ˆ e2, . . . , ˆ en). (B.36) The proposition now follows by comparing (B.30) and (B.36) and using the fact that D(ˆ e1, ˆ e2, . . . , ˆ en) = 1. Corollary B.9. If A is invertible we have det (A−1) = (det A)−1. (B.37) Proof. Just use Theorem B.8: 1 = det I = det (AA−1) = (det A)(det A−1). (B.38) B.4 Cramer’s Rule A few other related results are worth recording. Theorem B.10 (Cramer’s Rule). Let v1, v2, . . . , vn be n column vectors. Let x1, . . . , xn ∈k be given, and define v = n X j=1 xjvj. (B.39) 89 Then, for each i we have xiD(v1, v2, . . . , vn) = D(v1, . . . , v |{z} ith place , . . . , vn). (B.40) Proof. Say i = 1. By multilinearity we have D(v,v2, . . . , vn) = n X j=1 xjD(vj, v2, . . . , vn). (B.41) By Lemma B.3 every term on the right hand side is zero except the term with j = 1. Remark. Cramer’s rule allows us to solve simultaneous sets of linear equations (although there are easier ways). Example 17 Consider the following system of linear equations: 3x1 + 5x2 + x3 = −6 x1 −x2 + 11x3 = 4 7x2 −x3 = 1. We may write this as x1     3 1 0    + x2     5 −1 7    + x3     1 11 −1    =     −6 4 1    . The three column vectors on the left correspond to the vectors v1, v2, and v3 above, and constitute the three columns of a matrix we shall call A. By Theorem B.10 this system has the following solution: 90 x1 = 1 det A −6 5 1 4 −1 11 1 7 −1 , x2 = 1 det A 3 −6 1 1 4 11 0 1 −1 , x3 = 1 det A 3 5 −6 1 −1 4 0 7 1 . (Of course, one still has to evaluate all the determinants.) Corollary B.11. Let v1, v2, . . . , vn be the n column vectors of an n by n matrix A. Then these vectors are linearly dependent if and only if D(v1, v2, . . . , vn) = det A = 0. Proof. Suppose the vi are linearly dependent. Then w := P i civi = 0 for some constants {ci}n i=1, not all of which vanish. Suppose ci ̸= 0. Then 0 = D(v1, v2, . . . , w |{z} ith place , . . . 
, vn) = ci det A, where the first equality follows because a determinant vanishes if one en-tire column vanishes and the second equality follows from Theorem B.10. Conversely, suppose the vi are linearly independent. Then we may write ˆ ei = Pn j=1 Bjivj for some matrix B, where the ˆ ei are the canonical basis vectors of Rn. Then by Theorem B.7, 1 = D(ˆ e1, ˆ e2, . . . , ˆ en) = (det B)D(v1, v2, . . . , vn) = (det B)(det A). (B.42) Hence det A cannot vanish. 91 Remark. Corollary B.11 shows that a set v1, v2, . . . , vn of vectors is a basis for Rn if and only if D(v1, v2, . . . , vn) ̸= 0. 92
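To close these appendices with something concrete, here is a small numeric sketch of Cramer's rule (Theorem B.10) applied to the linear system of Example 17, assuming Python with numpy:

    import numpy as np

    A = np.array([[3.0,  5.0,  1.0],
                  [1.0, -1.0, 11.0],
                  [0.0,  7.0, -1.0]])
    b = np.array([-6.0, 4.0, 1.0])

    detA = np.linalg.det(A)      # nonzero, so the columns form a basis (Corollary B.11)
    x = np.empty(3)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b             # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(Ai) / detA

    assert np.allclose(A @ x, b)   # agrees with np.linalg.solve(A, b)
    print(x)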
188862
https://theses.hal.science/tel-01191658v1/file/these_archivage_3159461o.pdf
HAL Id: tel-01191658, submitted on 2 Sep 2015. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Study of quantum dimer and partition models on honeycomb lattices

Thiago Milanetto Schlittler

To cite this version: Thiago Milanetto Schlittler. Study of quantum dimer and partition models on honeycomb lattices. Physics [physics]. Université Pierre et Marie Curie - Paris VI, 2015. English. NNT: 2015PA066120. tel-01191658.

DOCTORAL THESIS OF UNIVERSITÉ PIERRE ET MARIE CURIE. Speciality: Physics. Doctoral school: "Physique en Île-de-France". Carried out at the Laboratoire de Physique Théorique de la Matière Condensée (LPTMC). Presented by Thiago MILANETTO SCHLITTLER to obtain the degree of DOCTOR OF UNIVERSITÉ PIERRE ET MARIE CURIE. Thesis subject: Étude de modèles de dimères et partitions quantiques sur réseaux hexagonaux (Study of quantum dimer and partition models on honeycomb lattices). Defended on 15 June 2015 before a jury composed of: M. Benjamin CANALS (reviewer), M. Sylvain CAPPONI (reviewer), Mme Leticia F. CUGLIANDOLO (examiner), M. Kirone MALLICK (examiner), M. Frédéric MILA (examiner), M. Rémy MOSSERI (thesis advisor).

Contents

Introduction
1 Quantum dimer models: Rokhsar and Kivelson model
 1.1 Quantum Dimer Models
  1.1.1 Rokhsar-Kivelson model
 1.2 Hilbert space and Hamiltonian of the RK model
 1.3 Quantum to classical mapping and MC simulation
  1.3.1 Equivalence to a quantum Ising model on the dual lattice
  1.3.2 Approximation by a classical 3D Ising model
  1.3.3 Monte Carlo algorithm with cluster updates
 1.4 Studied observables
 1.5 Simulation results
  1.5.1 The star phase (−∞ < V/t < (V/t)C)
  1.5.2 The star to plaquette phase transition
  1.5.3 The plaquette phase ((V/t)C < V/t < 1)
  1.5.4 From the plaquette phase to the RK point
  1.5.5 The Rokhsar-Kivelson point (V/t = 1)
  1.5.6 Staggered phase (1 < V/t < ∞)
 1.6 Variational treatment
2 Quantum dimer models: V0 − V3 model
 2.1 Generalized QDM Hamiltonian: V0 − V3 model
  2.1.1 Flux and flux density
  2.1.2 Caveats of the adapted MC algorithm
 2.2 Phase diagram
  2.2.1 Classical limit
  2.2.2 S and H chains
 2.3 Regions of the quantum phase diagram
  2.3.1 f = 2 region: staggered phase
  2.3.2 f = 0 region
  2.3.3 f = 1/2 region: S2 phase
  2.3.4 The fan region: 1/2 < f < 2
 2.4 Perturbation analysis near the RK point
3 Planar partitions and quantum dimer models
 3.1 Partition problems: definition
 3.2 Applications of the partition problems
  3.2.1 Classical systems
  3.2.2 Boundary conditions of the 2D systems
  3.2.3 Quantum partition model, QPM
 3.3 Monte Carlo simulations
  3.3.1 Effects of the boundary conditions on the RK model
  3.3.2 Description of the band regions
 3.4 Simplex method
  3.4.1 Configuration space
  3.4.2 Description of the method
  3.4.3 Implementation and application of the simplex method
4 Classical planar partitions: from the amoebae to the arctic circle
 4.1 Description of the constrained corner growth model
 4.2 Thermodynamic limit
 4.3 Boundary transitions
Conclusion and perspectives
A From the 2D QDM to a 3D CIM
B Monte Carlo sampling and 1D cluster updates
C Energy and gap evaluation
D Dimer sum rules
E Perturbative star and plaquette phases
F Perturbation of the V0 − V3 model near the RK point

Introduction

Interacting spin systems in two dimensions have been widely studied over the last decades, both from experimental and theoretical points of view. Of importance in this context is the so-called resonating valence bond (RVB) approach put forward by P. W. Anderson in 1973 in order to analyze the physics of spin-1/2 Heisenberg antiferromagnets. This has later been advocated as a way to study the yet unsolved problem of high-temperature superconductivity. Following Rokhsar and Kivelson, it proves interesting, when studying the low energy properties of these phases, to consider a simpler model, called the quantum dimer model (QDM). In the latter, the SU(2) singlet bonds between the spins are replaced by hard core dimers defined on the edges of the lattice, and the Hilbert space can be built using the classical dimer coverings (fig. 1) as an orthonormal basis. Quantum dimer models have been employed for a wide range of problems, including not only superconductivity [2, 3], but also frustrated magnets [4, 5, 6, 7, 8] and hardcore bosons. From a theoretical point of view, the QDM's are interesting due to properties such as topological order, spin liquid phases, and deconfined fractional excitations, which are present depending on the lattice and boundary conditions chosen.
One of its main properties is the existence, by construction, of a special point in the phase diagram of the QDM, named the RK point, where the ground state can be determined analytically as a superposition of all possible classical dimer coverings, all with equal weights. Depending on the underlying lattice, this point can be part of a liquid phase, such as the Z2 RVB liquid phase for the 2D triangular lattice [10, 11, 12], or a critical point between different crystalline phases, as for the square and honeycomb 2D lattices. Also, under periodic boundary conditions, the Hilbert space of a QDM on a bipartite lattice (such as the honeycomb lattice) can be divided into a series of isolated topological sectors, in the sense that two states from different sectors cannot be connected through local transformations, only through non-local ones. Due to this, simulations whose main dynamical components are local transformations (as well as exact diagonalizations) have to be carried out separately on each such flux sector, comparing, for each value of the Hamiltonian parameters, which sector carries the ground state configuration.

Figure 1: Example of a classical dimer covering on a honeycomb lattice.

When studying the phase diagram of a QDM, it is common to treat it using numerical methods. One of these methods, exact diagonalization (ED), gives us direct access to all the observables of the system, including the energy levels and thus the gaps. An exact diagonalization, though, is very expensive in time, processing power and memory, limiting the size of the systems that can be studied with it. Other numerical methods, such as Markov chain Monte Carlo simulations (MC), allow one to simulate QDM's of larger sizes, but they also have their share of disadvantages. In the case of the MC algorithms, one must introduce a system temperature parameter T, which must be taken as near to zero as possible to obtain the behavior of the ground state, at the cost of increasing the simulation times. Also, certain observables promptly obtained with ED, such as the ground state energy and the gap, require some work to be implemented. For our work, we decided to use a Monte Carlo algorithm to simulate the QDM's, comparing the results with exact diagonalizations whenever the system's size allowed their use, to verify that the MC algorithm works correctly. During this thesis, we will mainly focus on the QDM's on a honeycomb lattice, for which there are still open questions, such as whether an intermediary phase of the original model proposed by Rokhsar and Kivelson, called the plaquette phase, is gapped or not. Previous work in the literature indicates that this phase is gapped, but through the use of indirect measurements. To study this question, we develop a world-line Quantum Monte Carlo (QMC) algorithm, expanding on the earlier work by Moessner et al. with new dimer-related order parameters. We use this algorithm and the new order parameters to describe in deeper detail the different ground state phases of this model, and mainly to determine the ground state energies as well as gaps to the first excited states through the use of imaginary-time correlation functions. Using this QMC algorithm, we also study a more general version of the RK model, with an additional term in the potential energy.
This model, which we called the V0 − V3 model, presents a very interesting phase diagram, with not only the phases and phase transitions of the RK model, but also several transitions between different topological sectors, which are by definition isolated from each other. In particular, we present numerical results that are compatible with the "Cantor deconfinement" scenario near the RK point, proposed by Fradkin et al. for the QDM's, and which are in accordance with results obtained through perturbative analysis near this point.

The initial motivation of this thesis, though, was not the study of the QDM's on a honeycomb lattice with Monte Carlo simulations, but rather came from a different angle, using the so-called generalized integer partition problems. A generalized integer partition is, in a few words, an ensemble of integers obeying a series of order relations, with the latter defining the partition problem. These entities were first proposed at the start of the XXth century by MacMahon, who proposed numerous approaches and enumeration solutions which played an important role in the foundation of modern combinatorics.

Figure 2: Equivalence between a planar partition, represented as stacks on the left, and a classical dimer covering.

As is often the case, combinatorial problems can be put in relation with different and interesting questions of statistical mechanics. In the present case, the so-called 2-dimensional or planar partitions are equivalent to dimer coverings of a hexagonal tiling, ground state configurations of an antiferromagnetic spin model on a triangular lattice, or random rhombus tilings [19, 20, 21]. In all cases, the conditions which define the partition problem imply that the dimers or rhombi are constrained to a hexagonal boundary, as in fig. 2. It was therefore tempting to study a "quantum partition problem", corresponding to a hexagonal quantum dimer problem, to see in particular if the specific boundary conditions may affect the quantum behaviour found with periodic boundary conditions. In chapter 3 this will be studied along two lines: (a) with a QMC algorithm, identical to the one used in the standard RK and the V0 − V3 models; and (b) with an approximate method, called the simplex method, which uses the prior detailed knowledge of the partition configuration space, which is turned into the Hilbert space of the quantum problem.

Still in the context of the partition problems, we can use them to describe other classical problems, such as crystal corner growth and melting [22, 23]. Using the framework of the planar partition problems, we will propose a thermodynamic model based on an energy proportional to the sum of a partition's integers. This model, which we studied analytically and through a classical Monte Carlo algorithm, describes a transition from a mathematical shape called the "amoebae", used in the context of crystal corner melting, to another surface called the arctic circle [24, 25], with a crossover between the shapes instead of a sharp transition. However, transition regimes can be identified when looking at more local parameters.

This manuscript is organized in the following fashion. In the first chapter, we will study the QDM originally proposed by Rokhsar and Kivelson, describing the world-line Quantum Monte Carlo algorithm used and the new order parameters.
In the second chapter, we will propose a generalization of this model (and of the corresponding QMC algorithm) that allows us to explore in details the topological order of the QDM’s. In the third chapter, we will present a brief description of the partition problems applied to a quantum problem, followed by an analysis of the effects 8 INTRODUCTION of the new boundary conditions imposed on the honeycomb lattice and a description of the simplex method. In the fourth chapter, we will present our work on the classical crystal corner growth model based on the planar partitions. Finally, more technical points and deductions will be presented on the appendices. Chapter 1 Quantum dimer models: Rokhsar and Kivelson model We will describe in this chapter the first part of our work with the quantum dimer model on the hexagonal (honeycomb) lattice, proposed by Rokhsar and Kivelson. We studied this model numerically, extending on earlier work by Moessner et al. [26, 5, 13] with new dimer-related order parameters, which describe in deeper details the dif-ferent ground state phases of this model, and mainly allowing us to determine the ground state energies as well as gaps to the first excited states through the use of the imaginary-time correlation functions. In particular, this allows us to determinate that the so-called plaquette phase has a non-zero gap – a point which was previously advo-cated with general arguments and some data for an order parameter, but required a more direct proof. We supplement this numerical study with a variational treatment of the plaquette phase. On the technical side, we will describe an efficient world-line Quantum Monte Carlo algorithm with improved cluster updates that increase accep-tance probabilities by taking account of potential terms of the Hamiltonian already during the cluster construction. A large part of the results shown in this chapter were published in . 1.1 Quantum Dimer Models The original quantum dimer model (QDM) was proposed by Rokhsar and Kivelson as a model to study the properties of superconductors. In their model, the interacting spins on a 2D lattice are no longer the degrees of freedom of the system: instead, the SU(2) singlet bonds between them are used as such, being replaced by hard core dimers defined on the edges of the lattice. The QDM’s, in general, feature a series of interesting properties, such as topological order, spin liquid phases, and deconfined fractional excitations [5, 7]. Before enlarging on the quantum systems, let us say a few words about the classical case. Lattice dimer coverings – the basis states of the Hilbert space in the quantum case – represent already a rich mathematical problem with many connections to statistical physics problems. For a graph defined by its vertices and edges (defining faces, often called plaquettes in the present context), a dimer covering is a decoration of the bonds, 9 10 CHAPTER 1. QUANTUM DIMER MODELS: RK MODEL A B C A B C A B C (a) A B C A B C A B C (b) (c) Figure 1.1: Prototypes of quantum dimer ground states on a honeycomb lattice: (a) star phase, (b) plaquette phase, (c) staggered phase. Edges with a high probability of carrying a dimer are indicated in black, and edges with a ∼50% probability are indicated in gray. The gray hexagons in the plaquette state correspond to benzene-like resonances of a flippable plaquettes. The three triangular sublattices A, B, C are also shown. 
In the star state, flippable plaquettes occupy two of the sublattices (here, A and B) while, in the plaquette phase, the resonant plaquettes all lie on the same sublattice (here, A).

For a graph defined by its vertices and edges (defining faces, often called plaquettes in the present context), a dimer covering is a decoration of the bonds such that every vertex is reached by exactly one dimer. The simplest rearrangement mechanism for dimer coverings is provided by so-called plaquette flips. These are applicable to plaquettes around which every second bond carries a dimer; the flip amounts to exchanging covered and uncovered bonds around the plaquette, yielding a different valid dimer covering. Dimer coverings are closely related to other configurational problems: for the hexagonal lattice, these are ground state configurations of a classical Ising-spin model with antiferromagnetic interactions on the (dual) triangular lattice, planar rhombus tilings, and height models [27, 19]. Topological sectors, invariant under the flip operations, can be characterized by so-called fluxes, which will be detailed in the next section. These topological properties depend strongly on the boundary conditions and have consequences for the physics of the quantum dimer model.

1.1.1 Rokhsar-Kivelson model

The quantum version, as proposed by Rokhsar and Kivelson, corresponds to considering the set of all dimer coverings of the classical problem as an orthonormal basis spanning the Hilbert space. The Hamiltonian contains kinetic terms that correspond precisely to the elementary flips described above and an additional potential term, proportional to the number of flippable plaquettes. The competition between these kinetic and potential terms leads to a non-trivial phase diagram: for example, when the potential term dominates in amplitude and is of negative sign, the ground state is expected to be dominated by configurations which maximize the number of flippable plaquettes; for the opposite sign, one expects a ground state dominated by dimer configurations without flippable plaquettes. As will be discussed, such configurations exist and correspond to the so-called star and staggered phases, respectively.

In between these two extremes, the phase diagram can display intermediary phases. The ground state is known exactly at the point where kinetic and potential terms are of equal strength. The physics around this so-called Rokhsar-Kivelson (RK) point is expected to be different for bipartite and non-bipartite lattices . In this chapter, we will present an extensive study of the quantum dimer model on the bipartite hexagonal (honeycomb) lattice along the lines already followed by Moessner et al. . In their seminal work, these authors numerically investigated the phase diagram by studying a local order parameter which, in addition to the generic RK transition point, shows a first order transition separating the star phase from an intermediary phase, the so-called plaquette phase (see figs. 1.1a and 1.1b for sketches of these two phases). Based on data for three different temperatures, Moessner et al. argued that the plaquette phase should be gapped – a point which conflicts with an earlier analysis by Orland . In our work we used Quantum Monte Carlo simulations to extend the numerical work by studying new order parameters for different system sizes and temperatures, as well as ground-state energies and excitation gaps obtained from imaginary-time correlation functions. This leads to a clear confirmation of the gapped nature of the plaquette phase.
We briefly explain the reason for the conflicting results of ref. . The plan of this chapter is as follows. In section 1.2, the quantum dimer Hamiltonian is detailed and the nature of the different phases is explained. In section 1.3, we describe the employed world-line Quantum Monte Carlo algorithm, which is based on a mapping of the two-dimensional (2D) quantum model to a 3D classical problem, and which we accelerate through suitable cluster updates. We will introduce the observables used with this algorithm in section 1.4, while section 1.5 will present the results of the numerical simulations in terms of the general phase diagram, the analysis of the aforementioned observables, which help characterize the different phases, and results on ground-state energies and gaps. Finally, section 1.6 compares the numerical results with some variational methods. Detailed discussions of some technical issues are delegated to the appendices at the end of this thesis.

1.2 Hilbert space and Hamiltonian of the RK model

We consider the 2D hexagonal lattice of spins 1/2 with periodic boundary conditions. As described in the previous section, the quantum dimer models are defined on the subspace spanned by dimer configurations where every spin forms a singlet $(|{\uparrow},{\downarrow}\rangle - |{\downarrow},{\uparrow}\rangle)/\sqrt{2}$ with one of its three nearest neighbors. These different dimer configurations are used as an orthonormal Hilbert space basis. Models of this type are for example important in the context of resonating valence bond states and superconductivity [2, 4, 6]. Note that different dimer coverings of the lattice (dimer product states) are not orthogonal with respect to the conventional inner product for spin-1/2 systems ($\langle\sigma|\sigma'\rangle = \delta_{\sigma\sigma'}$). However, as explained in ref. , the two inner products can be related to one another through additional longer-ranged terms in the Hamiltonian which turn out not to be essential. Denoting by $|a_i\rangle$ and $|b_i\rangle$ the two configurations of a flippable plaquette $i$ (three dimers on alternating edges, exchanged by a flip), the Hamiltonian reads

(1.1) $\hat H_{\mathrm{QDM}} = -t \sum_i \big( |a_i\rangle\langle b_i| + \mathrm{h.c.} \big) + V \sum_i \big( |a_i\rangle\langle a_i| + |b_i\rangle\langle b_i| \big)$.

It contains a kinetic term $\propto t$ that flips flippable plaquettes (those with three dimers along the six plaquette edges) and a potential term $\propto V$ that counts the number of flippable plaquettes. The sums in eq. (1.1) run over all plaquettes $i$ of the hexagonal lattice on a torus. The potential term favors ($V < 0$) or disfavors ($V > 0$) flippable plaquettes. The only free parameter of this model is hence the ratio $V/t$. In the rest of this thesis, a plaquette carrying $j$ dimers is called a $j$-plaquette, such that 3-plaquettes are the flippable ones. The configuration space of the system is not connected, but splits into different topological sectors which are not related by flips. Each sector is characterized by two (flux) quantum numbers, also known as winding numbers: call A and B the two triangular sublattices of the hexagonal lattice, such that all nearest neighbors of any site from A are in B. To compute the flux $W$ through a cut $C$ of the lattice, first orient all cut edges, say, from A to B, weight them by 2 or $-1$, depending on whether they are covered by a dimer or not, and multiply each weight by $\pm 1$ according to the orientation of the edge with respect to $C$. The flux $W$ is then computed by summing the contributions of all cut edges. Such fluxes are invariant under plaquette flips. As fluxes through closed contractible curves are zero, one has two flux quantum numbers $W_x$ and $W_y$, corresponding to the two topologically distinct closed non-contractible curves on the torus.
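To make the flux convention concrete, here is a minimal sketch of the bookkeeping just described (the data layout – a dimer covering as a set of edges, and the cut as a list of oriented A→B edges with their orientation signs relative to the cut – is an assumption of this sketch, not the thesis implementation):

    def flux_through_cut(dimers, cut):
        # `dimers`: set of frozenset({site_a, site_b}) edges carrying a dimer.
        # `cut`: list of (a, b, sign) tuples, each edge oriented from
        # sublattice A to B; sign = +/-1 is its orientation relative to C.
        W = 0
        for a, b, sign in cut:
            weight = 2 if frozenset((a, b)) in dimers else -1  # dimer: 2, empty: -1
            W += sign * weight
        return W

This is the quantity that remains invariant under plaquette flips, as stated above.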
Notice that these two fluxes characterize an average slope in the height representation of the system. We postpone a more detailed description of the flux sectors to chapter 2 (including figs. 2.1 and 2.2, showing how to calculate the flux for a few dimer coverings and the difference between a local and a non-local flip loop), where this concept is more important due to a more complex potential term. For the description of the RK model given in this chapter, it is sufficient to consider that the quantum dimer model’s Hilbert space is divided into flux sectors, which can be connected only through non-local operations. The Quantum Monte Carlo algorithm implemented in section 1.3.3 uses local flip operations to explore the configuration space, and thus cannot visit a flux sector different from the one given by the initial state of the simulation.

Let us briefly recall the phase diagram obtained in ref. . Three phases belonging to two different topological sectors have been described. The ground states of the so-called star phase ($-\infty < V/t < (V/t)_C$) and of the plaquette phase ($(V/t)_C < V/t < 1$) are found in the zero flux sector, while the staggered phase ground states ($1 < V/t < \infty$) are in the highest flux sector. Figure 1.1 shows prototype examples of these three phases. The ground states in the zero flux sector can be distinguished using sublattice dimer densities. For that purpose, we recall that the plaquettes of the hexagonal lattice can be separated into three subsets – triangular sublattices A, B and C of disjoint plaquettes, as depicted in fig. 1.1 – such that every hexagon of a set shares bonds with three hexagons of each of the two other sets.

1.3 Quantum to Classical mapping and Monte Carlo simulation

As done by Moessner, Sondhi, and Chandra [26, 5, 13], the 2D quantum dimer model on a hexagonal lattice can be studied by mapping it first to a 2D quantum Ising model on the (dual) triangular lattice.

Figure 1.2: Equivalence of dimer coverings of the hexagonal lattice and Ising-spin configurations on the (dual) triangular lattice. Every dimer corresponds to a frustrated bond (↑–↑ or ↓–↓). Flipping a plaquette in the hexagonal lattice is equivalent to flipping a spin in the dual lattice.

The resulting Ising-type quantum model can be studied efficiently using a world-line Quantum Monte Carlo by approximating its partition function and observables by those of a classical 3D Ising-type model (CIM) on a stack of triangular 2D lattices (quantum-classical mapping), as described in the following subsections. Furthermore, we accelerate the Monte Carlo simulation of the classical 3D model through suitable cluster updates.

1.3.1 Equivalence to a quantum Ising model on the dual lattice

As shown in fig. 1.2, the dual of the hexagonal lattice is the triangular lattice whose vertices are located at the hexagon centers. We assign a spin 1/2 ($\sigma_i = \pm 1$) to each of the vertices and, as explained in the following, the quantum dimer model eq. (1.1) maps in the limit $J_z \to \infty$ to the Ising-type quantum model

(1.2) $\hat H_{\mathrm{QIM}} = J_z \sum_{\langle i,j \rangle} \hat\sigma^z_i \hat\sigma^z_j - t \sum_i \hat\sigma^x_i + V \sum_i \delta_{\hat B_i, 0}$

on the triangular lattice, where $\{\hat\sigma^x_i, \hat\sigma^y_i, \hat\sigma^z_i\}$ denote the Pauli spin matrices for lattice site $i$.
The operator $\hat B_i := \sum_{j \in N_i} \hat\sigma^z_j$, with $N_i$ being the set of the six nearest neighbors of site $i$, yields for a $\{\hat\sigma^z_i\}$-eigenstate the value zero if exactly three of the six bonds starting at site $i$ are frustrated, where a bond is called frustrated if the corresponding two spins are parallel. At the center of each triangle lies a vertex of the hexagonal lattice. For a given dimer covering, one dimer is attached to this vertex, and that dimer crosses exactly one of the three edges of the triangle at an angle of 90° (see fig. 1.2). For sufficiently strong $J_z$, the physics of the quantum Ising model eq. (1.2) is restricted to the subspace spanned by the classical ground states. Those have exactly one frustrated bond per triangle (all other configurations having higher energy). The identification of dimer basis states and Ising basis states is then straightforward. Given a certain dimer configuration, put a spin up on an arbitrary site. Associating frustrated Ising bonds with those that are crossed by a dimer in the given state, we can work our way outward, assigning further Ising spins until the triangular lattice is filled. The state, up or down, of a new site depends on the spin state of an already assigned neighboring site and on whether the corresponding bond is frustrated or not. This mapping of dimer configurations on the hexagonal lattice to spin-1/2 configurations on the triangular lattice implies that, for the quantum Ising model, we employ the conventional inner product $\langle\sigma|\sigma'\rangle = \delta_{\sigma\sigma'}$ that makes different $\{\hat\sigma^z_i\}$-eigenstates orthonormal. In the Hamiltonian eq. (1.2), the spin-flip terms $\propto t$ correspond to the kinetic term of the quantum dimer model eq. (1.1). Due to the energetic constraint imposed by $J_z \to \infty$, they are only effective on sites where the spin flip does not change the number of frustrated bonds, corresponding to the flippable plaquettes of the dimer model. The term $\propto V$ corresponds exactly to the potential term of the dimer model. The equivalence of the two models is slightly complicated by two issues. First, as we are free to choose the orientation of the first assigned spin, a given dimer configuration corresponds to two spin configurations that differ by a global spin flip. Second, periodic boundary conditions correspond, for certain topological sectors of dimer configurations, to anti-periodic boundary conditions in the Ising model. The latter point will be important in the second chapter.

1.3.2 Approximation by a classical 3D Ising model

To apply a world-line Quantum Monte Carlo algorithm , we can approximate the partition function and observables of the quantum Ising model eq. (1.2) on the 2D triangular lattice by those of a 3D classical Ising model on a stack of 2D triangular lattices through a Trotter-Suzuki decomposition [30, 31]. To this purpose, we separate the Hamiltonian given in eq. (1.2) into two parts,

(1.3) $\hat H_{\mathrm{QIM}} = \hat H_z + \hat H_x$, with $\hat H_x := -t \sum_i \hat\sigma^x_i$,

(1.4) $\hat H_z := H_z(\{\hat\sigma^z_i\}) := J_z \sum_{\langle i,j \rangle} \hat\sigma^z_i \hat\sigma^z_j + V \sum_i \delta_{\hat B_i, 0}$.
As detailed in appendix A, one can use the Trotter-Suzuki decomposition

(1.5) $e^{-\beta \hat H_{\mathrm{QIM}}} = \Big( e^{-\frac{\Delta\beta}{2} \hat H_z}\, e^{-\Delta\beta \hat H_x}\, e^{-\frac{\Delta\beta}{2} \hat H_z} \Big)^N + O(\Delta\beta^3)$

of the density operator with imaginary-time step $\Delta\beta \equiv \beta/N$ to determine the parameters $K_z$ and $K_\tau$ of the classical Ising model

(1.6) $E_{\mathrm{CIM}}(\sigma) = K_z \sum_n H_z(\sigma^n) - K_\tau \sum_{n,i} \sigma^n_i \sigma^{n+1}_i$

such that the partition functions $Z_{\mathrm{QIM}} \equiv \mathrm{Tr}\, e^{-\beta \hat H_{\mathrm{QIM}}}$ and $Z_{\mathrm{CIM}} = \sum_\sigma e^{-E_{\mathrm{CIM}}(\sigma)}$ of the two models coincide (up to a known constant $A$),

(1.7a) $Z_{\mathrm{QIM}} = A \cdot Z_{\mathrm{CIM}} + O(\Delta\beta^3)$,
(1.7b) with $A = [\sinh(2\Delta\beta t)/2]^{LN/2}$,

as well as expectation values of observables $\hat O = O(\{\hat\sigma^z_i\})$ that are diagonal in the $\{\hat\sigma^z_i\}$-eigenbasis,

(1.8a) $\langle \hat O \rangle_{\mathrm{QIM}} = \langle O \rangle_{\mathrm{CIM}} + O(\Delta\beta^3)$, where
(1.8b) $\langle \hat O \rangle_{\mathrm{QIM}} \equiv \frac{1}{Z_{\mathrm{QIM}}} \mathrm{Tr}\big(e^{-\beta \hat H} \hat O\big)$ and
(1.8c) $\langle O \rangle_{\mathrm{CIM}} \equiv \frac{1}{Z_{\mathrm{CIM}}} \sum_\sigma e^{-E_{\mathrm{CIM}}(\sigma)} O(\sigma)$.

In these equations, $\sigma = (\sigma^n | n = 1, \dots, N)$ is a vector of classical spin configurations $\sigma^n = (\sigma^n_i | i \in T)$ on the triangular lattice $T$, one for each of the imaginary-time slices $n = 1, \dots, N$. As shown in appendix A, the parameters $K_z$ and $K_\tau$ of the classical Ising model eq. (1.6) are given by

(1.9) $K_z = \Delta\beta$ and $e^{-2K_\tau} = \tanh(\Delta\beta t)$.

1.3.3 Monte Carlo algorithm with cluster updates

The representation of expectation values of quantum observables as expectation values of classical observables, eq. (1.8a), is of great value, as the latter can be evaluated efficiently with a Monte Carlo algorithm by sampling classical states $\sigma$. Specifically, one generates a Markov chain of classical states $\sigma$ with probabilities $e^{-E_{\mathrm{CIM}}(\sigma)}/Z_{\mathrm{CIM}}$ and averages $O(\sigma)$ over these states. The simplest update scheme would be to choose, in every iteration of the algorithm, one of the flippable spins (a spin on site $j$ of time slice $n$ is flippable if and only if $\sum_{i \in N_j} \sigma^n_i = 0$), compute the energy difference $E_{\mathrm{CIM}}(\sigma') - E_{\mathrm{CIM}}(\sigma)$ that flipping the spin would cause, and flip it with a probability given by the so-called Metropolis rule, as detailed in appendix B. However, as one increases the accuracy by reducing $\Delta\beta$ (at fixed inverse temperature $\beta = N\Delta\beta$), the coupling $K_\tau$ between the time slices increases as $K_\tau \propto \log(1/\Delta\beta)$ and the classical Ising model hence becomes stiff with respect to the time direction. In the generated states $\sigma$, larger and larger 1D clusters of spins with the same orientation, $\sigma^m_i = \sigma^{m+1}_i = \dots = \sigma^{m+n}_i$, occur along the time direction. Flipping one of the spins inside such a cluster becomes less and less frequent, as the associated energy change increases with the increasing coupling $K_\tau$. This would result in an inefficient Monte Carlo sampling with high rejection rates for spin flips. We avoid this effect by doing 1D cluster updates instead of single-spin updates: in every iteration of the algorithm an initial flippable spin is selected and, in an intermediate phase, a 1D cluster is grown in the imaginary-time direction before proposing to flip this cluster as a whole. We further decrease rejection rates by taking into account the changes in the number of flippable spins already during the cluster construction. See appendix B for details. Besides computing in this way expectation values of diagonal operators $\hat O = O(\{\hat\sigma^z_i\})$, one can also evaluate expectation values of non-diagonal operators, such as certain correlation functions or the energy expectation value, as described in appendix C.
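To fix ideas, here is a minimal sketch of the elementary single-spin Metropolis step just described (the data layout – sigma[n, i] = ±1 on a stack of N layers, neighbors[i] the six in-layer neighbors – and the callback diagonal_delta, standing in for the change of $K_z H_z(\sigma^n)$ under the flip, are assumptions of this sketch, not the thesis code):

    import numpy as np

    def metropolis_step(sigma, neighbors, diagonal_delta, K_tau, rng):
        # Pick a random spin on a random imaginary-time slice.
        N, L = sigma.shape
        n, j = rng.integers(N), rng.integers(L)
        # Flippability check: the six in-layer neighbors must sum to zero.
        if sigma[n, neighbors[j]].sum() != 0:
            return False
        # Energy change: diagonal in-layer part (via the assumed callback)
        # plus the two bonds along the imaginary-time direction.
        dE = diagonal_delta(sigma, n, j)
        dE += 2 * K_tau * sigma[n, j] * (sigma[(n - 1) % N, j] + sigma[(n + 1) % N, j])
        # Metropolis rule: accept with probability min(1, e^{-dE}).
        if rng.random() < np.exp(-dE):
            sigma[n, j] *= -1
            return True
        return False

In the actual algorithm, the accept/reject decision is instead taken for a whole 1D cluster grown from the site $(n, j)$ along the imaginary-time direction, which keeps acceptance rates reasonable as $K_\tau$ grows.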
1.4 Studied observables

In the next section, section 1.5, we numerically characterize the phase diagram of the QDM using several observables: the magnetization of the associated Ising model, dimer densities, the ground-state energy, and the energy gap to the first excited state. Let us briefly describe them in the following. In all cases, $L$ is the total number of sites of the triangular lattice (in other words, the system size).

Magnetization We compute the root mean square (RMS) magnetization

(1.10) $\langle \hat m^2 \rangle^{1/2} := \big\langle \big( \textstyle\sum_i \hat\sigma^z_i / L \big)^2 \big\rangle^{1/2}_{\mathrm{QIM}}$

of the quantum Ising model, eq. (1.2), as an order parameter to distinguish the star and plaquette phases and to facilitate comparison with earlier work . This version of the magnetization must be used, instead of the straightforward mean magnetization, because of the possibility of a global spin flip. As explained in section 1.3.1, a dimer configuration corresponds to two spin configurations differing by a global spin flip. While such a global operation is not an issue far away from the phase transitions, it is an important factor near one, even for large system sizes.

Local and global dimer observables The simulations give access to the dimer densities $\langle \hat n_i \rangle$, the average number of dimers on plaquette $i$. Two-dimensional (contrast) plots of these densities nicely illustrate the ground-state structure, even when its periodicity has to be understood in a probabilistic way. We also evaluate the normalized total numbers of $j$-plaquettes, $\langle \hat\rho_j \rangle$, for $j = 0, 1, 2, 3$. Specifically, with $j$-plaquettes being the plaquettes carrying $j$ dimers,

(1.11) $\hat\rho_j := \sum_i \delta_{\hat n_i, j} / L$.

As described in appendix D, the plaquette numbers $\langle \hat\rho_j \rangle$ obey the sum rule

(1.12) $\langle \hat\rho_3 \rangle - \langle \hat\rho_1 \rangle - 2 \langle \hat\rho_0 \rangle = 0$.

Notice that $\langle \hat\rho_2 \rangle$ does not enter the sum rule, while changes in the number of 3-plaquettes, which enter both the kinetic and potential energy terms, must be compensated by plaquettes with zero dimers or one dimer. $\langle \hat\rho_2 \rangle$ is nevertheless constrained by the fact that the total number of plaquettes is of course constant, i.e., $\sum_{j=0}^{3} \langle \hat\rho_j \rangle = 1$. We will often assemble the $\langle \hat\rho_j \rangle$'s of a given state into a single vector $\rho = (\langle \hat\rho_0 \rangle, \langle \hat\rho_1 \rangle, \langle \hat\rho_2 \rangle, \langle \hat\rho_3 \rangle)$.

Sublattice dimer densities As described at the end of section 1.2, the hexagonal plaquettes can be separated into three sets (A, B, C), each forming a triangular lattice, such that every hexagon in a set shares a bond with three hexagons of each of the two other sublattices. The “prototype” states of the star and the plaquette phases (fig. 1.1) can be characterized qualitatively in terms of dimer densities in the three sublattices. To this purpose, we can analyze averaged dimer densities on each sublattice, denoted $\langle \hat n_{A,B,C} \rangle$, i.e., $\hat n_A \equiv \frac{3}{L} \sum_{i \in A} \hat n_i$ etc.

It should be stressed that the systems under study may have degenerate (or nearly degenerate) ground states. The star crystal (ground state for $V/t = -\infty$) and the ideal plaquette state (which is not a ground state of $\hat H_{\mathrm{QDM}}$, see below) are both 3-fold degenerate. For sufficiently large systems, it is expected that this symmetry is kinetically broken in the Monte Carlo simulation. However, one cannot fully prevent the system from translating from one typical ground-state configuration to another (even at the level of medium-size patches), smearing out the information carried by these local parameters. This possibility was minimized here by choosing large system sizes and low temperatures.
We nevertheless carefully kept track of this possible problem in analyzing the data. Specifically for the sublattice dimer densities, during the course of the Monte Carlo simulation we have (re)ordered the three sublattices whenever a measurement indicated that such a translation had happened.

Ground state energy To study the phase diagram, it is certainly of high interest to access the ground-state energy, which directly decides what phase prevails for given values of the Hamiltonian parameters. For sufficiently low temperatures in the simulation, the expectation value $\langle \hat H_{\mathrm{QIM}} \rangle$ of the quantum Ising model Hamiltonian corresponds to the ground-state energy. But $\hat H_{\mathrm{QIM}}$ is not a diagonal operator, and hence eq. (1.8) cannot be used. It can nevertheless be evaluated on the basis of the imaginary-time correlators $\langle \sigma^n_i \sigma^{n+1}_i \rangle_{\mathrm{CIM}}$ (the derivation of this equation is presented in appendix C):

(1.13) $\langle \hat H \rangle_{\mathrm{QIM}} = \frac{1}{N} \sum_n \Big[ \langle H_z(\sigma^n) \rangle_{\mathrm{CIM}} + \frac{t}{\sinh(2\Delta\beta t)} \sum_i \langle \sigma^n_i \sigma^{n+1}_i \rangle_{\mathrm{CIM}} \Big] - Lt \coth(2\Delta\beta t) + O(\Delta\beta^2)$.

Energy gap It is important to determine whether a given phase has gapless excitations or not. We can estimate the energy gap to the first excited state by fitting imaginary-time correlation functions $\langle \hat A(0) \hat A^\dagger(i\tau) \rangle$. In the classical Ising model eq. (1.6) they correspond to inter-layer correlators with layer distance $\Delta n = \tau/\Delta\beta$. For sufficiently low temperatures, and for $\tau$ and $\beta - \tau$ large compared to the inverse gap to the second excited state, the leading terms of the correlation function are of the form $a + b \cdot \cosh((\beta/2 - \tau)\Delta E)$, allowing us to fit the upper bound $\Delta E$ of the gap. This procedure is explained in more detail in appendix C.

Before we start describing our numerical results for the RK model, let us say a few words about the parameter we used to assess the convergence of our simulations. Measurements made in Markov chain Monte Carlo algorithms, like the one we used, present by construction strong correlations between them. It is important, then, not only to perform enough measurements to obtain a good sampling of the measured observables, but also to guarantee a strong enough exponential decay of their autocorrelation. To do so, we calculated for each observable the integrated autocorrelation time, $\tau_{\mathrm{IAC}}$, which is most of the time of the same order as the autocorrelation time of the exponential decay. Let us denote the value of an observable measured at a time $s$ by $O_s$. The autocorrelation between measurements separated by a time interval $t$ is given by

(1.14) $C(t) = \langle O_s O_{s+t} \rangle - \langle O_s \rangle^2$,

where the means are calculated over all the measurements. The integrated autocorrelation time is defined as a sum of the normalized autocorrelations over all time intervals $t$:

(1.15) $\tau_{\mathrm{IAC}} = \frac{1}{2} \sum_{t \ge 1} \frac{C(t)}{C(0)}$.

Figure 1.3: The root-mean-square magnetization $\langle \hat m^2 \rangle^{1/2}$, as defined in section 1.4, for the quantum dimer model. The different curves correspond to different system sizes $L$ and are obtained from Monte Carlo simulations for $V/t \le 1$ with $\beta = 19.2$ and $\Delta\beta = 0.02$. For all $V/t > 1$, the staggered state, depicted in fig. 1.1c, is the ground state and hence $\langle \hat m^2 \rangle^{1/2} = 0$.

For our algorithm, the sum defining $\tau_{\mathrm{IAC}}$ converges fast enough after $10^3$–$10^4$ measurements for most values of $V/t$.
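As an illustration, eqs. (1.14) and (1.15) can be estimated directly from a series of measurements; a minimal sketch (the truncation window t_max is a practical choice of this sketch, not part of the definition):

    import numpy as np

    def tau_iac(obs, t_max=None):
        # Integrated autocorrelation time, eq. (1.15), estimated from the
        # measurement series `obs`; C(t) as in eq. (1.14).
        obs = np.asarray(obs, dtype=float)
        n = len(obs)
        if t_max is None:
            t_max = n // 10          # truncation window (practical choice)
        mean = obs.mean()
        c0 = np.mean((obs - mean) ** 2)
        return 0.5 * sum(
            np.mean((obs[:n - t] - mean) * (obs[t:] - mean)) / c0
            for t in range(1, t_max)
        )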
The exceptions to this fast convergence are the points or regions where determining the ground state is especially difficult, such as near the phase transitions or when the first gap is so small that it becomes difficult to differentiate the ground state from the first excited one.

1.5 Simulation results

Let us now study in detail the phase diagram of the quantum dimer model, starting from large negative $V/t$, i.e., in the star phase. The observables described in the previous section are evaluated in simulations for patches of linear size $\ell$ with a 60° rhombus shape, periodic boundary conditions, and $L = \ell^2$ plaquettes. In order to be able to separate the lattice into the three sublattices A, B, and C described above, $\ell$ needs to be a multiple of three. All of the observable values presented here, with the exception of the energy gaps, are mean values of several measurements, with each measurement separated by $O(L \cdot N)$ Monte Carlo update attempts. For the first-excitation energy gaps, this procedure was applied to the imaginary-time correlation functions used to calculate them.

1.5.1 The star phase ($-\infty < V/t < (V/t)_C$)

This phase has previously been called the “columnar phase”, in analogy with a corresponding phase of the square lattice quantum dimer model, where dimers are aligned along columns. For the hexagonal lattice, this denomination is a bit misleading, mainly due to the columnar arrangement of the dimers in the staggered phase (fig. 1.1c), and we follow ref. in calling it the “star phase” (the name originates from the rhombus tiling associated with this dimer configuration in the limit $V \to -\infty$, which is known as the “star tiling” or the “dice lattice”). For large negative $V$, the potential term dominates the kinetic term and the ground state is dominated by dimer configurations that maximize the number of flippable plaquettes. In the limit $V \to -\infty$, the ground state is a 3-fold degenerate crystal (commonly named the “star crystal”) where all plaquettes of two of the three sublattices are flippable, say A and B, while all plaquettes of the third sublattice are dimer-free:

(1.16) $|\psi_{\mathrm{star}}\rangle = \bigotimes_{i \in A} |a_i\rangle \bigotimes_{j \in B} |a_j\rangle$.

Figure 1.1a shows one of these degenerate star crystals. Changing from the dimer to the Ising-spin representation, the sublattices A and B carry spins of equal orientation, say $\sigma_{A,B} = +1$, and all spins on sublattice C have the opposite orientation ($\sigma_C = -1$), such that the RMS magnetization reaches its maximum possible value $\langle \hat m^2 \rangle^{1/2} = 1/3$. It decreases as $V/t$ is increased – a behavior which is clearly seen in fig. 1.3. Notice that, for $V/t = -3$, $\langle \hat m^2 \rangle^{1/2}$ is still very close to the maximum value 1/3. To understand how increasing $V/t$ affects the ideal star state, one can do perturbation theory in $t/V$. The calculation, done up to second order in $t/V$, is given in appendix E. The result for the ground state energy is plotted in fig. 1.9 (curve labeled $E^{(2)}_{\mathrm{Star}}$). This first correction to the ideal star state amounts to mixing in configurations with one flipped plaquette, and it compares well with the simulation results up to $V/t \sim -1$. The changes due to these corrections in the ground state can be quantified by the numbers of $j$-plaquettes, as done in fig. 1.6. In the ideal star state ($V/t \to -\infty$), one has

(1.17) $\rho = (1/3, 0, 0, 2/3)$.

Let us say that the sublattices A and B contain the flippable plaquettes in this limit.
After flipping a plaquette in A, the three neighboring plaquettes in sublattice B lose one of their three dimers, which are transferred to the three neighboring plaquettes in C, each now carrying one dimer. As a result, the numbers of 0- and 3-plaquettes are reduced, and accordingly those of 1- and 2-plaquettes are increased. This explains why the curves for $\langle \hat\rho_0 \rangle$ and $\langle \hat\rho_3 \rangle$ in fig. 1.6 decrease at the same rate up to $V/t \sim -1$, and why those for $\langle \hat\rho_1 \rangle$ and $\langle \hat\rho_2 \rangle$ increase and lie on top of each other in the same interval. Finally, the contrast plots of the dimer density $\langle \hat n_i \rangle$ and the plot of the sublattice dimer densities $\langle \hat n_{A,B,C} \rangle$, presented in figs. 1.7a and 1.7b, show that two sublattices have almost three dimers per site, while the third one stays almost empty, and that the difference between them is reduced before reaching the critical point $(V/t)_C$.

1.5.2 The star to plaquette phase transition

The most interesting part of the phase diagram is the first order transition occurring at $(V/t)_C$ between the star phase and the so-called plaquette phase. At this point, the RMS magnetization $\langle \hat m^2 \rangle^{1/2}$ suddenly drops to a much smaller value, which goes to zero in the thermodynamic limit. The position and amplitude of this drop are sensitive to three simulation parameters: the quantum inverse temperature $\beta$, the imaginary-time step (and inverse-temperature discretization) $\Delta\beta \equiv \beta/N$, and the system size $L$. Raising $L$ or $\beta$, or reducing $\Delta\beta$, increases the simulation's precision, while also increasing the number of measurements needed, and thus the simulation time. To find reasonable values for these parameters, we vary each one while keeping the other two constant. Figures 1.4a and 1.4b show $\langle \hat m^2 \rangle^{1/2}$ as a function of $V/t$ for, respectively, varying values of $\Delta\beta$ and $\beta$. They indicate that the value of $(V/t)_C$ barely varies for $\beta$ larger than 19.2 and $\Delta\beta$ smaller than 0.02, and so we used these values in most of our simulations. Figure 1.3 displays the RMS magnetization over the whole phase diagram $V/t$ and for various system sizes, while fig. 1.5a provides a zoom close to the transition. From the latter figure, we can see that a system size $L$ larger than or equal to $60 \times 60$ gives a sharp enough phase transition. We determined $(V/t)_C$ by plotting $\langle \hat m^2 \rangle^{1/2}$ as a function of the inverse (linear) size of the system (fig. 1.5b), finding the value

(1.18) $(V/t)_C = -0.228 \pm 0.002$,

consistent with, but more precise than, the value given in ref. . Of course, the transition can also be observed in the dimer observables. All the normalized $j$-plaquette numbers $\langle \hat\rho_j \rangle$ show a small but clear discontinuity at $(V/t)_C$ (fig. 1.6). The discontinuity of $\langle \hat\rho_3 \rangle$ (see fig. 1.8b) is of special importance, attesting to the first order character of the transition. This order parameter is identical to the fraction of flippable sites of a given state, and thus equal to the derivative of the energy $\langle \hat H_{\mathrm{QDM}} \rangle$ with respect to $V$ (eq. (1.1)). Consequently, the mean energy (fig. 1.8a) has a slope change at $(V/t)_C$ – which is barely visible due to the small amplitude of the discontinuity. Finally, at least as spectacular as the magnetization drop is the local dimer density change seen in fig. 1.7a. The ground state clearly transforms from a state following the dimer density of the star phase to one following the “prototype” plaquette phase (figs. 1.1a and 1.1b).
Figure 1.7b shows this effect more quantitatively: we have a sudden shift in the sublattice dimer densities from a state where two sublattices have almost three dimers while the third is empty, to one where the resonating plaquettes are located on one of the three sublattices.

Figure 1.4: Locating the transition between the star and plaquette phases while varying the temperature parameters: root-mean-square magnetization $\langle \hat m^2 \rangle^{1/2}$ as a function of $V/t$, with (a) $\beta = 19.2$ constant and different values of $\Delta\beta$, and with (b) $\Delta\beta = 0.02$ constant and different temperatures. In both cases, $L = 81 \times 81$.

Figure 1.5: Locating the transition between the star and plaquette phases while varying the system size: root-mean-square magnetization $\langle \hat m^2 \rangle^{1/2}$ for (a) different lattice sizes $L$ as a function of $V/t$, and (b) plotted for different values of $V/t$ as a function of the inverse linear system size $1/\ell$. In both cases, $\beta = 19.2$ and $\Delta\beta = 0.02$.

Figure 1.6: Normalized numbers of $j$-plaquettes, $\langle \hat\rho_j \rangle$, for the zero flux sector, system size $L = 81 \times 81$, $\beta = 19.2$, and $\Delta\beta = 0.02$. Around $(V/t)_C$, a finer grid of points was used to resolve the jumps in the densities. In that region, data points are not marked by symbols. Although the global ground state is not in the zero flux sector for $V/t > 1$, data obtained for the zero flux sector is also shown for that region and is discussed in the text.

Figure 1.7: (a) Local dimer density $\langle \hat n_i \rangle$ for different values of $V/t$ with $L = 60 \times 60$ plaquettes, $\beta = 19.2$, and $\Delta\beta = 0.02$. (b) Sublattice dimer densities $\langle \hat n_{A,B,C} \rangle$ as functions of $V/t$ for $L = 60 \times 60$ (solid lines) and $L = 12 \times 12$ plaquettes (dashed lines), respectively.

Figure 1.8: Zoom near the first order phase transition at $(V/t)_C$ for different lattice sizes with $\beta = 19.2$ and $\Delta\beta = 0.02$: (a) while the ground-state energy density, $\langle \hat H_{\mathrm{QDM}} \rangle / L$, is continuous near $(V/t)_C$, (b) the density of flippable 3-plaquettes, $\langle \hat\rho_3 \rangle$, displays a small but evident jump.

Figure 1.9: Numerically computed energy density $\langle \hat H_{\mathrm{QDM}} \rangle / L$ for $\beta = 19.2$, $\Delta\beta = 0.02$, and $L = 81 \times 81$, compared to variational (section 1.6) and perturbative (appendix E) estimates.

1.5.3 The plaquette phase ($(V/t)_C < V/t < 1$)

The plaquette phase is certainly more complex to describe than the star phase. The features of the ground state in this phase are to some extent captured by the “ideal” resonating plaquette state (fig.
1.1b), a product state in which all plaquettes of one of the three sublattices, say A, are in the benzene-type resonance state $(|a_i\rangle + |b_i\rangle)/\sqrt{2}$. This would in principle lead to a three-fold degeneracy (the resonating plaquettes could just as well be located on sublattices B or C). In contrast to the star phase case, the ideal plaquette state is not an exact ground state for any $V/t$. The actual ground states can be viewed as the ideal plaquette state, dressed by a variable amount of flip excitations on the other two plaquette sublattices. For convenience, we use the Ising-spin notation. In this representation, the ideal plaquette state $|\psi_{\mathrm{plaq}}\rangle$ can be written as

(1.19) $|\psi_{\mathrm{plaq}}\rangle = \bigotimes_{i \in A} |{\rightarrow}\rangle_i \bigotimes_{j \in B} |{\uparrow}\rangle_j \bigotimes_{k \in C} |{\downarrow}\rangle_k$,

where $|{\rightarrow}\rangle_i$ denotes the $\hat\sigma^x_i$-eigenstate $(|{\uparrow}\rangle_i + |{\downarrow}\rangle_i)/\sqrt{2}$. The spins in sublattices B and C must be anti-parallel with respect to each other. In accordance with the numerical results, the RMS magnetization $\langle \hat m^2 \rangle^{1/2}$ also vanishes for the ideal plaquette state $|\psi_{\mathrm{plaq}}\rangle$ in the thermodynamic limit. As $\sum_i \hat\sigma^z_i |\psi_{\mathrm{plaq}}\rangle = \sum_{i \in A} \hat\sigma^z_i |\psi_{\mathrm{plaq}}\rangle$, we have

(1.20) $\langle \psi_{\mathrm{plaq}} | \hat m^2 | \psi_{\mathrm{plaq}} \rangle = \langle \psi_{\mathrm{plaq}} | \big( \textstyle\sum_{i \in A} \hat\sigma^z_i \big)^2 | \psi_{\mathrm{plaq}} \rangle / L^2 = \sum_{i \in A} \langle \psi_{\mathrm{plaq}} | (\hat\sigma^z_i)^2 | \psi_{\mathrm{plaq}} \rangle / L^2 = \frac{1}{3L} \to 0$.

The energy density for $|\psi_{\mathrm{plaq}}\rangle$ can be computed easily and yields an upper bound to the exact ground state energy (appendix E). At $V = 0$, it takes for example the value $-1/3$, which is clearly above the value determined numerically through MC simulations ($E_{\mathrm{CIM}} \approx -0.38$, see fig. 1.8a) and through exact diagonalizations on small systems ($-0.37 < E_{\mathrm{CIM}} < -0.366$). One can improve $|\psi_{\mathrm{plaq}}\rangle$ as a variational state by adding flip excitations on sublattices B and C (this is possible due to the fact that 3-plaquettes occur in B and C with density 1/8), as we do in section 1.6.

Energy gap in the plaquette phase: A finite energy gap for the plaquette phase was advocated in ref. with an indirect numerical confirmation based on the magnetization for three different temperatures – a statement which disagreed with an earlier prediction in ref. . As we said in section 1.4, it is possible to estimate excitation gaps on the basis of imaginary-time correlation functions, through the fitting of their exponential decay, which depends on the quantum temperature $1/\beta$. Again, the computation itself is described in appendix C. Figure 1.10 presents our results for different temperatures: starting from the star phase, the gap estimate dips markedly around the first order phase transition at $(V/t)_C$. It then increases again in the plaquette phase, and eventually goes to zero as we approach the RK point at $V/t = 1$.

Figure 1.10: Estimates of the energy gap $\Delta E/t$ to the first excited state for different temperatures. The gaps were obtained from fits of imaginary-time auto-correlation functions $\langle \hat\sigma_i(0) \hat\sigma_i(i\tau) \rangle_{\mathrm{QIM}}$, for a system with $L = 36 \times 36$ sites. The results should be interpreted as upper bounds on the real gap, which are close to the actual gap after convergence in $\beta$. The convergence of the curves for the different temperatures and the gap's maximal value of roughly $0.7t$ around $V/t \approx 0.1$ are clear evidence for a finite gap in the plaquette phase.

In the vicinity of the RK point, the curves for the gap estimates are no longer converged with respect to the temperature. The reason is simply that, as the gap vanishes, temperatures would have to be reduced more and more to obtain the actual gap from the imaginary-time correlators.
Also, fitting the correlation functions becomes more difficult as they ultimately change from an exponential to an algebraic decay. Further evidence for the finite gap is given by the temperature dependence of observables: when lowering the temperature, observables should converge once the temperature is sufficiently below the gap. This is confirmed in fig. 1.4b. Let us look at further observables to better understand the plaquette phase. The normalized $j$-plaquette numbers $\langle \hat\rho_j \rangle$ are shown in fig. 1.6. They appear to be much more sensitive to variations in $V/t$ than the RMS magnetization. As $V/t$ increases, $\langle \hat\rho_3 \rangle$ and $\langle \hat\rho_0 \rangle$ continuously decrease while $\langle \hat\rho_2 \rangle$ increases, and $\langle \hat\rho_1 \rangle$ stays almost constant, assuming its maximal value in the phase diagram. The constant and maximal value of $\langle \hat\rho_1 \rangle \approx 0.25$ seems to be a characteristic signature of the plaquette phase. For the ideal plaquette state $|\psi_{\mathrm{plaq}}\rangle$, one obtains $\rho = (1/12, 1/4, 1/4, 5/12)$. For no value of $V/t$ do we find agreement with these values, showing once again the difference between the ideal and real plaquette states. On the other hand, the sublattice dimer densities (fig. 1.7b) follow closely what is expected from the ideal plaquette phase for $V/t$ up to $\sim 0.7$: one sublattice has nearly three dimers per site, and its resonating status is confirmed by the other two sublattices, which stay at $\sim 1.5$ dimers per site. For $V/t > 0.7$, this order parameter shows strong fluctuations, which will be better described in section 1.5.4.

A conflicting argument for a gapless plaquette phase: In contrast to the findings described above and those of ref. , Orland argues in ref. that the quantum dimer model should have gapless excitations for $V/t = 0$. We believe that this is due to a mistake in his derivation. In ref. , the model for $V/t = 0$ and a hexagonal lattice with open boundary conditions is mapped to a model of vertically fluctuating non-intersecting strings on a square lattice. Let $Y = (Y_1, \dots, Y_\ell)$ with $Y_{x+1} - Y_x \in \{0, 1\}$ denote the $y$-coordinates of such a string. First, one can obtain the ground state $\nu(Y)$ of a single string, which corresponds to the ground state of the XX model (energy density $\epsilon_0 = -2t/\pi$) and to that of the quantum dimer model in the topological sector with flux quantum numbers $(W_x, W_y) = (0, W_y^{\mathrm{max}} - 1)$, where $W_y^{\mathrm{max}}$ is the maximum possible flux in the $y$-direction. One can now add further strings, each reducing $W_y$ by one. To construct an $N$-string ground state wavefunction, in ref. , the product of vertically shifted single-string ground states is considered. To take account of the no-intersection constraint for the strings ($Y^{(n)}_x \neq Y^{(n')}_x$ for all $x$ and $n \neq n'$), Orland then anti-symmetrizes the resulting wavefunction with respect to the string positions – specifically, first with respect to the variables $(Y^{(n)}_1 | n = 1, \dots, N)$, then with respect to $(Y^{(n)}_2 | n = 1, \dots, N)$, and so on. In analogy to the anti-symmetrization for fermions, he concludes that the resulting state has energy density $N\epsilon_0$ and is hence the $N$-string ground state. Generalizing the procedure to excited states, gapless excitations are found which simply correspond to gapless excitations of a single string. We think that the described anti-symmetrization, also employed in refs. [33, 34], is flawed. Unlike the conventional anti-symmetrization for fermions, the resulting $N$-string wavefunction is not a sum of product states but also contains entangled states.
Hence, the resulting state is not an energy eigenstate. As a simple example of the conventional anti-symmetrization, consider two non-interacting fermions in 2D space. The anti-symmetrization of a product wavefunction $\mu(x_1, x_2)\nu(y_1, y_2)$ is $\mu(x_1, x_2)\nu(y_1, y_2) - \mu(y_1, y_2)\nu(x_1, x_2)$. It has zero amplitude for $(x_1, x_2) = (y_1, y_2)$ and has the same energy $E_\mu + E_\nu$ as the original state. For two strings and $\ell = 2$, the anti-symmetrization as suggested in ref. would lead to a different type of wavefunction, namely $\mu(x_1, x_2)\nu(y_1, y_2) - \mu(x_1, y_2)\nu(y_1, x_2) - \mu(y_1, x_2)\nu(x_1, y_2) + \mu(y_1, y_2)\nu(x_1, x_2)$. While it is zero for intersecting strings ($x_1 = y_1$ or $x_2 = y_2$), the second and third components of the sum are not products of single-string states. Hence, the resulting energy is not simply $E_\mu + E_\nu$.

1.5.4 From the plaquette phase to the RK point

The behaviour of the sublattice dimer densities seen for $0.7 < V/t < 1$, where we approach the RK point, deserves special attention. The current belief is that, for bipartite lattices, there is a continuous transition from the plaquette phase to the RK point, the latter being an isolated critical point. Some of our measured parameters, like the dimer densities (fig. 1.6), do show the expected smooth behavior. Nevertheless, the magnetization curves displayed in fig. 1.3 show a small bump before the RK point, and the sublattice dimer densities show large fluctuations in this interval. The most natural explanation for this behavior is finite-size effects, but it is also due to the vanishing of the gap in the vicinity of the RK point. Figure 1.10 clearly shows that, in this region, the gaps keep decreasing with the temperature, with no sign of convergence to a single curve. This effect leads to an enhancement of the fluctuations of the Monte Carlo simulations and to critical slowing down. More precisely, the observed effects may be attributed to a ground state with an approximate U(1) symmetry in the vicinity of the RK point. The continuum version of the height representation of the quantum dimer model has U(1) symmetry and algebraically decaying correlations at the RK point $V/t = 1$. For $V/t < 1$, close to the RK point, there are two length scales: one beyond which dimer-dimer correlators show exponential decay, signaling crystalline order, and one beyond which one can observe the breaking of the U(1) symmetry. A linear system size in between these two length scales corresponds to the crystalline U(1) regime . Such a symmetry was found previously for the square lattice QDM [36, 37]. It is possible to visualize it for the hexagonal RK model using an order parameter related to the dimer histogram seen in ref. . To do this, we propose the following complex order parameter, defined on the 3D CIM lattice,

(1.21) $P = \frac{1}{L \cdot N} \sum_{n=1}^{N} \sum_{l=1}^{L} z_l \cdot \delta_{B_{n,l}, 0}$,

where the sums run over all the $N$ layers of the 3D lattice and over the $L$ sites of each layer. $B_{n,l} = \sum_{i \in N_l} \sigma^n_i$ is the local magnetic field at the site $l$ of layer $n$, which is equal to zero when exactly three of the site's bonds are frustrated. $\delta_{B_{n,l},0}$ is then equal to 1 if this site corresponds to a 3-plaquette, and zero otherwise. Finally, the weight $z_l$ depends on the sublattice on which the site $l$ is found, with

(1.22) $z_l = \begin{cases} 1 & \text{if } l \in \text{sublattice A}, \\ \exp(2\pi i/3) & \text{if } l \in \text{sublattice B}, \\ \exp(-2\pi i/3) & \text{if } l \in \text{sublattice C}. \end{cases}$
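In practice, $P$ is accumulated over layers and sites during the simulation; a minimal sketch, under the same assumed data layout as in the earlier sketches (sigma[n, l] = ±1, neighbors[l] the six triangular-lattice neighbors of site l, sublattice[l] ∈ {0, 1, 2} for A, B, C):

    import numpy as np

    def order_parameter_P(sigma, neighbors, sublattice):
        # Complex order parameter of eq. (1.21): 3-plaquette sites
        # (B_{n,l} = 0) contribute the sublattice weight z_l of eq. (1.22).
        N, L = sigma.shape
        z = np.exp(2j * np.pi * np.array([0, 1, -1]) / 3)
        P = 0.0 + 0.0j
        for n in range(N):
            for l in range(L):
                if sigma[n, neighbors[l]].sum() == 0:   # B_{n,l} = 0
                    P += z[sublattice[l]]
        return P / (L * N)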
For a state invariant under full hexagonal translations, the mean value of this parameter should always be equal to zero, $\langle P \rangle = 0$, but its histogram in the complex plane should present points and peaks reflecting the state's dimer structure. Figure 1.11 shows what we should expect for $P$'s histogram for the (classical) 3-fold degenerate ideal star crystals and for the 3-fold degenerate ideal plaquette states, defined by the tensor product in eq. (1.19). In the former case, $P$ is equal to either $\exp(\pi i/3)/3$, $\exp(-\pi i/3)/3$ or $-1/3$, depending on which two sublattices the 3-plaquettes are located on. In the classical limit $V/t \to -\infty$, the histogram is composed of only these three points, and in the rest of the (quantum) star phase we have three distributions around these points. For the plaquette phase, the dimers follow a binomial distribution , and we should expect three relatively large distributions near the angles 0, $2\pi/3$ and $-2\pi/3$, depending on which sublattice the resonating plaquettes are found on.

Figure 1.11: Expected positions of the peaks in the histogram of the complex parameter $P$ for the ideal star and plaquette states, marked by, respectively, the filled and the empty regions.

Figure 1.12: (Preliminary results) Histogram of the complex parameter $P$ for a system with $L = 9 \times 9$, $\beta = 19.2$ and $\Delta\beta = 0.02$, near the RK point. Each histogram was re-scaled by its maximum value, found inside the yellow regions. There are no points inside the black regions.

Figure 1.12 shows some preliminary results for the histogram of the complex parameter $P$ inside the plaquette phase and near the RK point, for a system with $L = 9 \times 9$, $\beta = 19.2$ and $\Delta\beta = 0.02$. The peaks of the histogram are marked by the yellow regions, while its minima are marked by the black regions. We can easily identify three peaks for $V/t \le 0.5$, positioned at the same angles as for the ideal plaquette phase in fig. 1.11. The purple regions between the maxima probably correspond to measurements taken while the MC algorithm moved from one of the 3-fold degenerate plaquette states to another, and they should disappear for a high enough number of measurements and lower energies. For $V/t = 0.7$, we no longer have these three peaks; instead, the measurements of $P$ are concentrated almost uniformly on a triangular-like region, with its vertices located at the positions of the plaquette peaks. This is in accordance with our results for the sublattice dimer densities, fig. 1.7b, which no longer identify a plaquette state. As $V/t$ increases towards the RK point, this triangle becomes more uniform and transforms into a circular shape, indicating the presence of a U(1) symmetry. Still, we can yet identify a weak triangular structure at the RK point, and more simulations, with a lower temperature and a finer temperature discretization, are needed to fully describe it. Preliminary results for lower temperatures and exact diagonalization tests also indicate that the region near the origin, which presents no measurements in fig. 1.12, may be populated, and so further studies are needed.

1.5.5 The Rokhsar-Kivelson point ($V/t = 1$)

The Rokhsar-Kivelson point is the only point of the phase diagram where the system does not display local order. It is also the only point, as far as we know, where we can determine the ground state analytically. At this point, the Hamiltonian $\hat H_{\mathrm{QDM}}$ becomes
(1.23) $\hat H_{\mathrm{QDM,RK}} = -V \sum_i \big( |a_i\rangle\langle b_i| + \mathrm{h.c.} \big) + V \sum_i \big( |a_i\rangle\langle a_i| + |b_i\rangle\langle b_i| \big) = V \sum_i \big( |a_i\rangle - |b_i\rangle \big) \big( \langle a_i| - \langle b_i| \big)$,

which is a sum of projectors with eigenvalues 0 or 1. Consequently, the ground state energy is, by construction, zero, and the ground state is annihilated by $\hat H_{\mathrm{QDM}}$. Let us write the ground state at the RK point using the orthonormal basis $\{|\psi_i\rangle\}$ of the Hilbert space formed by the classical dimer coverings,

(1.24) $|\Psi_{\mathrm{QDM,RK}}\rangle = \sum_j a_j |\psi_j\rangle$,

and apply eq. (1.23) to it:

$\hat H_{\mathrm{QDM,RK}} |\Psi_{\mathrm{QDM,RK}}\rangle = \sum_j a_j \Big( N_{\psi_j} |\psi_j\rangle - \sum_{k:\, \psi_k \sim \psi_j} |\psi_k\rangle \Big) = 0$.

The first term is created by the potential term of the Hamiltonian, where $N_{\psi_j}$ is the number of neighbour configurations of the dimer covering $|\psi_j\rangle$ (coverings reached by a single flip). The second term, created by the kinetic term, is a sum over all the neighbours of $|\psi_j\rangle$. Each state $|\psi_l\rangle$ is created $N_{\psi_l}$ times by the kinetic term (once for each of its neighbours), so the only way to annihilate this state is to have a weight $a_j$ identical for all the basis states ($\mathcal N$ being the total number of basis states):

(1.25) $|\Psi_{\mathrm{QDM,RK}}\rangle = \frac{1}{\sqrt{\mathcal N}} \sum_j |\psi_j\rangle$.

Notice that this derivation is completely independent of the chosen topological sector. Consequently, every single one of them contains an RK state with zero energy. This degeneracy does not pose a problem for our simulations, though, since by definition we cannot jump from one sector to another through the local transformations used by our algorithm. At the RK point, many physical properties, like dimer-dimer correlations, can be derived from the behavior of the classical dimer problem. See for instance ref. , where the relation between quantum dimer models at the RK point and their classical counterparts is discussed. In particular, diagonal operator expectation values amount to classical enumerations. We used such computations to benchmark the QMC simulations.

1.5.6 Staggered phase ($1 < V/t < \infty$)

For $V/t < 1$, states with negative energy and flippable plaquettes are favoured either by the potential term (star phase) or by the kinetic term (plaquette phase). We have just seen that, at the RK point, the ground state energy is equal to zero. On the other hand, in the region $1 < V/t < \infty$, flippable plaquettes are strongly disfavoured. The Hamiltonian can always be rewritten as a sum of projectors with positive coefficients,

(1.26) $\hat H_{\mathrm{QDM}} = t \sum_i \big( |a_i\rangle - |b_i\rangle \big) \big( \langle a_i| - \langle b_i| \big) + (V - t) \sum_i \big( |a_i\rangle\langle a_i| + |b_i\rangle\langle b_i| \big)$.

In this region, both terms are positive for states with 3-plaquettes, and the ground state energy is non-negative. The only states with zero (and thus minimal) energy in this region are the staggered states (one of which is displayed in fig. 1.1c), which contain only 2-plaquettes. These states lie inside the topological sector with the highest flux density and, due to the absence of flippable plaquettes, are topologically isolated from the rest of the configuration space: no flippable plaquettes means that no local flip operations are possible, and so these states are isolated even from each other inside their flux sector. This also means that there is no sense in running MC simulations for these states, since they have no dynamics. Finally, notice that the staggered states are zero-energy eigenstates of $\hat H_{\mathrm{QDM}}$ not only in this region, but for all values of $V/t$. As discussed above, at the RK point all topological sectors contain (at least one) state of vanishing energy, while only the isolated ground states in the maximal flux sector persist for $V/t > 1$.
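This sector-by-sector zero mode is easy to check numerically at a toy level: abstract a flip-connected sector as a graph whose vertices are dimer coverings and whose edges are plaquette flips, build the sum of projectors of eq. (1.23), and verify that the equal-amplitude state of eq. (1.25) is annihilated. The small graph below is a hypothetical stand-in, not an actual dimer enumeration:

    import numpy as np

    def rk_hamiltonian(n_states, flips, V=1.0):
        # H_RK = V * sum over flips of (|a> - |b>)(<a| - <b|), eq. (1.23).
        H = np.zeros((n_states, n_states))
        for a, b in flips:
            v = np.zeros(n_states)
            v[a], v[b] = 1.0, -1.0
            H += V * np.outer(v, v)
        return H

    flips = [(0, 1), (1, 2), (2, 0), (2, 3)]  # toy connected sector
    H = rk_hamiltonian(4, flips)
    psi = np.ones(4) / 2.0                    # equal-weight state, eq. (1.25)
    assert np.allclose(H @ psi, 0.0)          # annihilated by H_RK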
Figure 1.13: Ground states of the zero flux sector inside the staggered region: local dimer density $\langle \hat n_i \rangle$ for different values of $V/t$ with $L = 81 \times 81$ plaquettes, $\beta = 19.2$, and $\Delta\beta = 0.02$.

Figure 1.14: Dimer covering of a $12 \times 12$ rectangular patch of the honeycomb lattice, minimizing the energy of the quantum dimer model inside the zero flux sector for $V/t > 1$. This dimer covering is composed of three staggered crystals (highlighted with different colors), two 1-plaquettes and two 3-plaquettes.

The change of topological sector results in a first order transition at the RK point, between the plaquette and the staggered phases. Order parameters such as the total numbers of $j$-plaquettes go from a non-zero value for all the $\langle \hat\rho_j \rangle$ to a density $\rho = (0, 0, 1, 0)$. Inside the zero flux sector, the Monte Carlo simulation emulates the staggered ground state under the constraints of the “wrong” topological sector, and the RK point corresponds to what seems to be a first order transition from the plaquette phase to states with a large majority of 2-plaquettes ($\langle \hat\rho_2 \rangle > 0.8$), vanishing $\langle \hat\rho_0 \rangle$, and small values of $\langle \hat\rho_1 \rangle = \langle \hat\rho_3 \rangle$ (see fig. 1.6). Figure 1.13 shows the local dimer density of the ground state in this region, for various values of $V/t$. At first, the ground state organizes itself into small, unaligned 2-plaquette clusters. The interfaces between two of these clusters are also formed by 2-plaquettes, and the few 1- and 3-plaquettes present are the corners where three of them come into contact. The presence of unaligned staggered clusters (and thus of 1- and 3-plaquettes) guarantees that the flux stays equal to zero. It should be noted here that the integrated autocorrelation times of the $\langle \hat\rho_j \rangle$ for $V/t > 1$ are very high, and thus the results in this part of the phase diagram are at best qualitatively correct. Indeed, one can easily build, for a rectangular patch with $\ell \times \ell$ plaquettes, a state inside the zero flux sector with only three staggered crystals, two 3-plaquettes and two 1-plaquettes (see fig. 1.14 for an example with $L = 144$). The 3-plaquette density of such a state is $\langle \hat\rho_3 \rangle = 2/\ell$, which is visibly lower than the one seen in fig. 1.6, and goes to zero in the thermodynamic limit. The energy of this state also goes to zero in this limit, and thus this state is a good candidate for the ground state of the zero flux sector for $V/t > 1$. The Monte Carlo algorithm does not manage to reproduce it in a reasonable time because there is an entropic barrier between the states shown in fig. 1.13 and the dimer covering of fig. 1.14. These states can be linked through a series of local transformations, but the “path” to follow inside the configuration space is so specific that the Monte Carlo algorithm is not capable of finding it in a reasonable time interval with the few flippable plaquettes available. From fig. 1.13, it might seem that the MC simulation converges better for $V/t = 150$, due to the smoother dimer density. Unfortunately, this is not true. As $V/t$ increases, the acceptance rate for adding a new spin to a cluster in the imaginary-time direction drops considerably, resulting in almost single-spin flips.
This means that there is a weak correlation between the different layers of the 3D Ising lattice; the dimer density is then an average over the quasi-classical dimer coverings seen for the other values of $V/t$, and is no nearer to the ground state than the other unconverged states.

1.6 Variational treatment

Before finishing this chapter, let us supplement the Monte Carlo study with a variational treatment. The main motivations are to find states that improve upon the ideal plaquette state proposed in eq. (1.19), to approximate the ground states in the plaquette phase, and to obtain further information on excitation gaps. The ideal plaquette state is a simple tensor product state with resonating 3-plaquettes on one of the three sublattices, say sublattice A, such that

(1.27) $|\psi_{\mathrm{plaq}}\rangle = \bigotimes_{i \in A} \big( |a_i\rangle + |b_i\rangle \big) / \sqrt{2}$.

Recall that $|\psi_{\mathrm{plaq}}\rangle$ is not an exact ground state for any value of $V/t$, and its energy expectation value yields an upper bound to the ground state energy. The contribution of the kinetic terms is due to the resonating 3-plaquettes (with a density $\langle \hat\rho_3 \rangle = 1/3$) and has the value $-tL/3$. The potential term contributes an $L/3$ term, due again to the resonating 3-plaquettes, plus a $2 \cdot (1/8) \cdot (L/3)$ term for the sublattices B and C, which have a 3-plaquette density of 1/8 each. This leads us to

(1.28) $E_{\mathrm{plaq}} = -\frac{L}{3} t + \Big( \frac{L}{3} + \frac{2L}{3} \cdot \frac{1}{8} \Big) V = L \Big( -\frac{1}{3} t + \frac{5}{12} V \Big)$.

For $V = 0$, this gives an energy of $-t/3$ per plaquette, which we recall is slightly above the value observed numerically in the MC simulations ($E_{\mathrm{CIM}} \sim -0.38t$) and through exact diagonalizations on small systems ($-0.37 < E_{\mathrm{CIM}} < -0.366$). Improving on this variational energy is possible in several ways. A simple method is to decompose the lattice into cells, as exemplified in fig. 1.15a, and consider a tensor product

(1.29) $|\Phi\rangle = |\phi\rangle \otimes |\phi\rangle \otimes |\phi\rangle \dots$

of states $|\phi\rangle$ defined on appropriately chosen subgraphs in each cell (bold edges in fig. 1.15a). We choose these subgraphs to contain all vertices of the A-hexagons in the cell and all edges connecting these vertices. The cell Hilbert space is spanned by all dimer coverings of the chosen subgraphs. This construction guarantees that indeed every vertex of the full lattice is reached by exactly one dimer. The cell state $|\phi\rangle$ is determined by minimizing the expectation value of the energy density $\langle \Phi | \hat H_{\mathrm{QDM}} | \Phi \rangle / L$ with respect to $|\phi\rangle$ under the normalization constraint $\|\phi\| = 1$. For the minimization of the energy functional, which is generally a sixth order polynomial in the basis coefficients, we employed the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm (L-BFGS, ), starting from several different initial states to find the global minimum. The simplest choice is the $3 \times 1$ cell depicted in fig. 1.15, which corresponds to considering states $|\phi\rangle = a |a\rangle_A + b |b\rangle_A$ with $a^2 + b^2 = 1$. The energy functional $-2tab + V(a^2 + b^2 + a^6 + b^6)$ is minimized by $a = -\frac{1}{6}\sqrt{18 - 6\sqrt{9 - 4t^2/V^2}}$ for $V/t < -2/3$ and by $a = 1/\sqrt{2}$ for $V/t \ge -2/3$; i.e., for $V/t \ge -2/3$, the solution is given by the ideal plaquette state eq. (1.19). This is reflected in the strong overlaps of the obtained variational state with the ideal plaquette and star states (respectively $\langle \phi | \phi_{\mathrm{plaq}} \rangle = 1$ and $\langle \phi | \phi_{\mathrm{star}} \rangle = 1/\sqrt{2}$, see fig. 1.15b), and in the constant normalized numbers of $j$-plaquettes, identical to the ones found for the plaquette phase, $\rho = (1/12, 1/4, 1/4, 5/12)$ (see fig. 1.15c).
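As a cross-check of the closed-form minimizer quoted above, the $3 \times 1$ functional can also be minimized numerically; a sketch (the parametrization $a = \cos\theta$, $b = \sin\theta$ enforces the normalization constraint, and scipy is assumed to be available):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def cell_energy(theta, t, V):
        # 3x1 cell energy functional -2 t a b + V (a^2 + b^2 + a^6 + b^6),
        # with a^2 + b^2 = 1 built in via a = cos(theta), b = sin(theta).
        a, b = np.cos(theta), np.sin(theta)
        return -2 * t * a * b + V * (a**2 + b**2 + a**6 + b**6)

    t = 1.0
    for V in (-2.0, 0.0):                     # one point in each regime
        res = minimize_scalar(cell_energy, bounds=(0, np.pi / 2),
                              args=(t, V), method='bounded')
        print(V, np.cos(res.x))               # compare with the closed form

For $V/t \ge -2/3$ the minimizer sits at $\theta = \pi/4$ (the ideal plaquette state), while for $V/t < -2/3$ it moves towards one of the two asymmetric solutions, whose coefficient magnitudes match the closed form (the roles of $a$ and $b$ may be exchanged between degenerate minima).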
When increasing the cell size up to 6 × 6 rectangles or lozenges, more and more hexagons of the B and C sublattices can be flipped, and thus the variational energy density decreases (see fig. 1.9) and observables such as the ⟨ρ̂j⟩ approach qualitatively the values observed in the Monte Carlo simulations. The variational results are similar in behavior to the MC simulations, but the star ↔ plaquette transition is found at V/t ≈ −0.5, which is quite far from the result described in section 1.5.2, (V/t)_C = −0.228 ± 0.002. The overlaps with the ideal star and plaquette states, displayed in fig. 1.15b, decay as the cell size increases. This is due to two effects: on the one hand, more and more corrections to the ideal states are taken into account, reducing the overlap; on the other hand, there is a type of orthogonality catastrophe that is inevitable in the thermodynamic limit. Still, the crossing between the overlaps of the cell state |φ⟩ with the two ideal states becomes sharper as the cell size increases, indicating a better differentiation of the star and plaquette phases.

The variational treatment can also be used to obtain approximations to the energy and the excitation gap. Figure 1.9 shows the energy found for the 6 × 6 cells, which agrees quite well with the MC results. To calculate the variational gap, we must first obtain the optimal cell state |φ⟩. Singling out a certain cell and fixing the state |φ⟩ on all other cells, we then compute an effective Hamiltonian

(1.30) $\langle n|\hat H^{\text{cell}}_{\text{eff}}|n'\rangle := \big(\langle n| \otimes \langle\phi| \otimes \langle\phi| \cdots\big)\,\hat H\,\big(|n'\rangle \otimes |\phi\rangle \otimes |\phi\rangle \cdots\big)$

for that cell. The gap between the ground state and the first excited state of $\hat H^{\text{cell}}_{\text{eff}}$ is displayed in fig. 1.15d. As with the dimer densities ρ, the variational gap shows the same features as the Monte Carlo computations (fig. 1.10): a local maximum of the gap inside the plaquette phase region, and a vanishing of the gap in the vicinity of the RK point, but with a critical point between the star and plaquette phases at V/t ≈ −0.5.

Figure 1.15: Variational treatment where the energy expectation value for a cell product state |Φ⟩ = |φ⟩ ⊗ |φ⟩ ⊗ |φ⟩ ⋯ is minimized with respect to |φ⟩. (a) Examples of the employed rectangular and lozenge cell shapes. The considered basis states for each cell are all dimer coverings of the marked edges. (b) Overlap of the cell state |φ⟩ with the ideal star state |φstar⟩ and the ideal plaquette state |φplaq⟩. (c) Normalized numbers of j-plaquettes, ⟨ρ̂j⟩. (d) Local excitation gap as defined in the text.

Chapter 2

Quantum dimer models: V0−V3 model

In the previous chapter, we studied in detail the quantum dimer model originally proposed by Rokhsar and Kivelson, which has a potential term proportional to the normalized number of 3-plaquettes, ρ3. In this chapter, we will present a generalization of this model, which we call the V0−V3 model, with a new potential term proportional to the number of 0-plaquettes, as its name suggests.
This new model contains the original RK model, and it presents a very interesting phase diagram, with several phase transitions between different flux sectors, which are by definition topologically disconnected. To study this phase diagram, we used an adapted version of the last chapter's MC method, as well as perturbative methods near the RK point. In particular, the latter approach gives us results compatible with the "Cantor deconfinement" scenario proposed by Fradkin et al. for the quantum dimer models.

2.1 Generalized quantum dimer model Hamiltonian: V0−V3 model

Let us consider the following generalized form of the quantum dimer model Hamiltonian, with the kinetic term seen previously and four potential terms proportional to the (normalized) total numbers of j-plaquettes,

(2.1) $\hat H = -t\sum_i \big(|\varhexagon_i\rangle\langle\overline{\varhexagon}_i| + \text{h.c.}\big) + \sum_{j=0}^{3} V_j\,\hat\rho_j$.

This expression supposes that the ρ̂j are independent, but, as we said in section 1.4, the mean values of these operators obey two sum rules (recall that L is the total number of plaquettes of the honeycomb lattice),

(2.2a) ⟨ρ̂0⟩ + ⟨ρ̂1⟩ + ⟨ρ̂2⟩ + ⟨ρ̂3⟩ = L,
(2.2b) 2⟨ρ̂0⟩ + ⟨ρ̂1⟩ − ⟨ρ̂3⟩ = 0,

where the latter is specific to the honeycomb lattice under periodic boundary conditions. The four potential terms are therefore redundant, and we can eliminate two of them. This Hamiltonian reduces to the original RK model when V0 = V1 = V2 = 0 and V3 = V, while the models obtained for V0 = ±1 and V1 = V2 = V3 = 0 are relevant to an Ising string-net model. Considering this, we choose to keep the ρ̂0 and ρ̂3 terms,

(2.3) $\hat H_{V_0-V_3} = -t\sum_i \big(|\varhexagon_i\rangle\langle\overline{\varhexagon}_i| + \text{h.c.}\big) + V_0\,\hat\rho_0 + V_3\,\hat\rho_3$.

Because of the new potential term, we name this quantum dimer model the V0−V3 model. As with the RK model, the overall scale t can be factored out, leaving us with the free parameters V3/t and V0/t. Also, since the case V0 = 0 reduces to the RK model, an RK point is present at V0 = 0 and V3 = t. The MC algorithm proposed in section 1.3.3 and detailed at the end of appendix B can still be used to study the V0−V3 model. The new equivalent quantum Ising model Hamiltonian is

(2.4) $\hat H_{\text{QIM}} = -t\sum_i \hat\sigma^x_i + J_z\sum_{\langle i,j\rangle} \hat\sigma^z_i \hat\sigma^z_j + V_3\sum_i \delta_{\hat B_i,0} + V_0\sum_i \delta_{\hat B_i,\pm 6}$,

where we recall that B̂i is the local magnetic field on site i, which takes the value zero on 3-plaquettes and the value ±6 on 0-plaquettes, depending on the sign of σ̂ᶻi. Since the added term only affects the potential part, the approximation by a classical 3D Ising model presented in section 1.3.2 stays unchanged. The acceptance rates of the cluster-update Monte Carlo algorithm must be changed slightly to take the V0 term into account, but the characterization given in section 1.3.3 remains valid. The necessary changes are described at the end of appendix B. We will show in this chapter that this model presents a rich phase diagram in the (V3/t, V0/t) plane, with a whole range of closely spaced phase transitions between different topological sectors, in accordance with the scenario of refs. [16, 44]. In this section, we will detail the behavior of the V0−V3 model and how its Hilbert space is divided into topological sectors. Mainly, we will recall the notions of flux and flux density, which we use extensively here. Following this, we will detail the caveats that must be considered when applying the Monte Carlo algorithm of section 1.3.3 to this new model. A full description of the phase diagram will be given in sections 2.2 and 2.3.
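As a quick consistency check of the reduction performed above, the elimination of two of the four potential couplings via the sum rules (2.2a) and (2.2b) can be verified symbolically. The following sketch (our own illustration, using SymPy) substitutes the sum rules into the general potential Σj Vj ρj and confirms that only ρ0, ρ3 and a constant proportional to L survive:

```python
import sympy as sp

r0, r1, r2, r3, L = sp.symbols("rho0 rho1 rho2 rho3 L")
V0, V1, V2, V3 = sp.symbols("V0 V1 V2 V3")

# Solve the two sum rules (2.2a) and (2.2b) for rho1 and rho2.
rules = sp.solve([r0 + r1 + r2 + r3 - L, 2 * r0 + r1 - r3], [r1, r2], dict=True)[0]

E = V0 * r0 + V1 * r1 + V2 * r2 + V3 * r3
print(sp.collect(sp.expand(E.subs(rules)), [r0, r3, L]))
# -> L*V2 + rho0*(V0 - 2*V1 + V2) + rho3*(V1 - 2*V2 + V3)
# Any (V0, V1, V2, V3) is thus equivalent to effective couplings on rho0 and
# rho3 alone, up to an overall constant: the V0-V3 form of eq. (2.3).
```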
In section 2.2, we will present the classical limit, t → 0, which gives us the behavior of the model far away from the origin, and define the S and H chains, which will be useful for constructing ansatz states over the full phase diagram. In section 2.3, we will fill in the rest of the phase diagram using an adapted version of the cluster quantum Monte Carlo presented in the previous chapter, and identify its four different regions. Among the latter, two deserve special attention. The zero flux region (section 2.3.2) contains most of the RK model seen in the previous chapter, but the presence of a new potential term adds some interesting dynamics. The most interesting region is the so-called fan region (section 2.3.4), where we see a series of transitions between different flux sectors, which can be described using the ansatz states. We close the chapter with an analysis near the RK point, section 2.4, where we have a possible "Cantor deconfinement" mechanism, as proposed by Fradkin et al. For reasons explained below, it is difficult to study this region with the MC simulations alone. We will therefore present a perturbative analysis near this point, which correlates nicely with this mechanism as well as with the fan region.

2.1.1 Flux and flux density

In section 1.2, we gave a brief description of the flux quantum numbers W, also known as winding numbers, and of how to calculate them. Let us recall the definition of the flux passing through a closed oriented curve C crossing the edges of the hexagonal lattice. Call A and B the two triangular sublattices of the hexagonal lattice, and orient the edges between them from A to B. The curve's flux can be calculated by associating a weight 2 or −1 to each edge, depending on whether or not it is covered by a dimer, and then multiplying each weight by ±1 according to the orientation of the edge (here, we use the convention that the weight is positive if the oriented edge points to the right of C, and negative otherwise). These weights were chosen in such a way that the flux of any contractible curve is equal to zero. Take, for example, a vertex of the hexagonal lattice, with its three edges, and a closed curve encircling it (left of fig. 2.1). By construction, one of the edges carries a dimer (weight 2) and the other two are empty (weight −1 each). Since all three edges have the same orientation from the point of view of C, their multiplicative sign is the same and the total flux is equal to zero. The same idea is valid for any contractible curve, since it will enclose only vertices with zero flux. This leaves us with the two distinct non-contractible curves on a torus, associated with the flux quantum numbers Wx and Wy. These numbers are invariant under local transformations, and thus can be used to index the disconnected topological sectors.

Figure 2.1: Calculating the flux for a given dimer covering on the honeycomb lattice (in the case of this figure, a lattice with 36 plaquettes and ℓx = ℓy = 6). On the left, we have the definition of the oriented edges. The flippable plaquettes are marked by a grey shade. The flux can be different from zero only over a non-contractible curve, defining the flux quantum numbers Wx and Wy. The states presented are, from left to right, the star state (fy = 0), one of the staggered states in the fy = 2 sector, and the so-called S2 crystal (fy = 1/2). In all cases, fx = 0.
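A minimal sketch of this bookkeeping (our own toy illustration, not the thesis code): each edge crossed by the non-contractible curve C contributes +2 if occupied and −1 if empty, multiplied by the orientation sign of its A → B edge relative to C. The number of edges crossed per unit length below is only illustrative.

```python
def winding_number(crossed_edges):
    # crossed_edges: list of (occupied, sign) pairs for each edge crossed by C;
    # weight +2 for a dimer, -1 for an empty edge, times the orientation sign.
    return sum((2 if occupied else -1) * sign for occupied, sign in crossed_edges)

ly = 6  # length of the non-contractible curve, in plaquette units

# Staggered state: C crosses one occupied edge per unit length -> W = 2*ly.
staggered = [(True, +1)] * ly
# Star state: per unit length C crosses one occupied and two empty edges,
# all with the same orientation sign -> contributions 2 - 1 - 1 = 0.
star = [(True, +1), (False, +1), (False, +1)] * ly

for name, edges in (("staggered", staggered), ("star", star)):
    W = winding_number(edges)
    print(f"{name:9s}: W_y = {W:+d}, f_y = {W / ly:+.1f}")
# -> staggered: f_y = +2.0, star: f_y = 0.0, as in fig. 2.1.
```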
For our study of the V0−V3 model, we used various rectangular ℓx × ℓy sections of the hexagonal lattice, and so we are more interested in the flux densities

(2.5) fᵢ = Wᵢ/ℓᵢ

than in the total flux. The value of fᵢ can vary from −1 to 2, corresponding respectively to the curve C crossing only empty or only occupied edges. Figure 2.1 shows how to calculate fᵢ for a few examples of dimer coverings, with varying values of fy and fx = 0. The staggered states, which have only 2-plaquettes, are in the sectors with the minimum and maximum y-flux densities, fy = −1 and fy = 2. The star and plaquette states, seen in the previous chapter, are in the sector with fy = 0. While the different flux sectors are not connected by local transformations, it is nevertheless possible to go from one flux sector to another through non-local operations. Any operation on a dimer covering can be done by choosing a closed flippable loop, formed by alternating empty and covered edges, and exchanging the occupancy of these edges. If a flippable loop is contractible, then the corresponding transformation can be decomposed into a series of local flips, and is thus a local operation. If the loop is not contractible, such a decomposition is impossible, and we have a non-local operation. This operation changes a flux quantum number by 3 each time the corresponding loop crosses the associated periodic boundary, due to the exchange, at the crossing, of an edge of weight ∓1 for one of weight ±2 (see fig. 2.2).

Figure 2.2: Flip loop operation for a dimer covering on the honeycomb lattice. The flippable plaquettes are marked by a grey shade. All contractible loops are associated with local operations, while non-contractible loop operations are non-local, changing the flux sector of the dimer covering (in the figure, from the f = 0 sector to the f = 0.5 and f = 2 sectors).

Figure 2.3: Change from a periodic boundary condition to an anti-periodic one after a non-contractible flip loop operation on a dimer covering on the honeycomb lattice.

A priori, to map the whole phase diagram of the V0−V3 model, we must run our MC algorithm for various values of fx and fy, with −1 ≤ fᵢ ≤ 2, which translates into a considerable number of simulations. After analyzing the results of the classical limit t → 0, which will be presented in section 2.2.1, we chose to restrict our simulations to fx = 0. As we will see there, the phase diagram is dominated by the sector fx = fy = 0 for negative V3/t and V0/t, and by the sector fy = 2, fx = 0 for positive V3/t. This leaves only the region (V3/t < 1, V0/t > 0) open to other flux sectors. Inside this region, the ρ̂0 term of the V0−V3 Hamiltonian is repulsive, and so the ground states must minimize the number of 0-plaquettes. The states that fulfill this condition most efficiently have a stripe or chain-like structure, shown in fig. 2.15, with regions rich in 3-plaquettes separated by staggered regions, which only present 0-plaquettes if the size of the staggered domains is too small. These states break the rotational symmetry, and can be oriented along the x direction in such a way that fx is always equal to zero. This reminds us of the behavior found
for a quantum dimer model on the square lattice, where the presence of repulsive and attractive terms in the Hamiltonian leads to domain-wall states. For reference, for a small system with L = 48 plaquettes, the phase diagram of the V0−V3 model, calculated through exact diagonalization, only contained flux sectors in which one of the flux densities is equal to zero. From here onward, we will only refer to one of the flux densities, fy = f.

2.1.2 Caveats of the adapted MC algorithm

Some specific points must be considered when using the adapted Monte Carlo algorithm on the V0−V3 model, due to the presence of the topological sectors. The phase transitions between the different flux density sectors can be determined by comparing, for each point (V3, V0), the energies of each pertinent flux sector. Since the kinetic term of eq. (2.4) stays unchanged compared to the RK model, the energy is still given by eq. (1.13), but with the new potential energy. The fact that the flux sectors cannot be connected through local transformations has a few consequences. On the one hand, it forces us to run a new batch of MC simulations for each flux sector, increasing the calculation time considerably. On the other hand, it means that low-gap effects between different flux sectors do not play a role here, although we can still have situations similar to the one described in section 1.5.4 for the original quantum dimer model, where we found that the gap goes to zero near the RK point in the zero flux sector. Also, following eq. (2.5), there is a limited number of flux densities accessible to an ℓx × ℓy rectangular section of the honeycomb lattice. We have seen that the minimal difference between two quantum numbers Wᵢ and Wⱼ is equal to three, which would give a flux density resolution of 3/ℓy. Unfortunately, these simplest non-local operations force the use of anti-periodic boundary conditions for the classical 3D Ising lattice (fig. 2.3). While it is possible to use such boundary conditions, they complicate somewhat the implementation of the order parameters. Because of this, we chose to use a flux density resolution equal to ∆f = 6/ℓy. With this, the fluxes accessible to a honeycomb lattice of dimensions ℓx × ℓy are given by

(2.6) $f = \frac{6k}{\ell_y} = k\,\Delta f, \qquad 0 \le f \le 2$,

where k is an integer. As we said before, only rational fluxes are accessible to finite-size lattices. Equation (2.6) imposes that, to raise the flux resolution by a factor X, at least one of the lattice's lengths must be increased by the same factor, which in the most naive estimate increases the simulation time by this same factor. Fortunately, the finite-size effects on all observables turn out to be limited for big enough systems, and thus, if needed, we can use two lattices with different lengths (and values of ∆f) to better explore the flux sectors. The phase transitions between the different flux sectors can be obtained by comparing, for each point (V3/t, V0/t), the ground state energy of each sector and pinpointing the energy crossings between them. Figure 2.4 shows an example of such a procedure for V3/t = −0.75 and a series of values of V0/t, with L = 3600, β = 9.6 and ∆β = 0.01. We see in it a jump of the ground-state flux near V0/t = 3, from f = 0 to 0.5 (seemingly skipping all the sectors in between), rapidly followed by a transition to f = 0.6.
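The sector-by-sector comparison just described is simple to automate. Below is a small sketch (with made-up, purely qualitative toy energies, not the actual MC data) of how the ground-state flux sector and the transition points can be read off from the per-sector energy curves:

```python
import numpy as np

def ground_state_sectors(V0_grid, energies):
    # energies: dict {f: array of E_f over V0_grid}. Returns the winning flux
    # density at each V0 and the V0 values where the winner changes.
    fs = sorted(energies)
    E = np.vstack([energies[f] for f in fs])          # shape (n_sectors, n_V0)
    winner = np.asarray(fs)[np.argmin(E, axis=0)]
    jumps = [(V0_grid[i], winner[i - 1], winner[i])
             for i in range(1, len(V0_grid)) if winner[i] != winner[i - 1]]
    return winner, jumps

# Toy, linear-in-V0 sector energies that merely mimic the shape of fig. 2.4.
V0 = np.linspace(-1.0, 5.0, 600)
toy = {0.0: -0.80 + 0.050 * V0,
       0.5: -0.72 + 0.020 * V0,
       0.7: -0.65 + 0.005 * V0}
_, jumps = ground_state_sectors(V0, toy)
for v0, f_old, f_new in jumps:
    print(f"ground state changes f = {f_old} -> {f_new} near V0/t = {v0:.2f}")
```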
Finally, at V0/t = 4.5, we have a transition to the f = 0.7 sector, where the ground state stays up to very large values of V0/t, as far as we could determine through the simulations. To make sure that the adapted MC method works correctly, we ran simulations for a small lattice, with only 48 plaquettes. This system contains only five flux sectors, namely f ∈ {0, 0.5, 1, 1.5, 2}, and is too small for the finite-size effects on the energy to be ignored, but is small enough to allow the use of more precise temperature parameters (in this case, β = 19.2 and ∆β = 0.005) and a comparison with exact diagonalization results. Both phase diagrams are presented in fig. 2.5, which shows good agreement between the two methods, and thus the validity of our adapted MC algorithm.

Figure 2.4: Energies for various flux density sectors, obtained through MC simulations for L = 3600, β = 9.6, ∆β = 0.01. V3 is constant and equal to −0.75, while V0 varies from −1 to 4.5. The inset shows the crossings between the energy curves in more detail.

Figure 2.5: Comparison between the exact diagonalization and Monte Carlo phase diagrams for a honeycomb lattice with L = 48 plaquettes. The points indicate the ED results, while the lines are the interfaces between the phases, obtained through MC. The RK point is denoted by the crossing of the three interfaces (marked by a red circle). β = 19.2 and ∆β = 0.005.

2.2 Phase diagram

Before we start describing the phase diagram of the V0−V3 model, it is useful to consider its classical limit and to define some elements that will aid us in this description. We will start with the classical limit, where t → 0. As the name suggests, this limit allows us to study the model when the quantum effects are negligible. We will follow this with a description of the S and H chains, elements that can be used to build ansatz states describing the model's various phases.

2.2.1 Classical limit

Let us study here the ground states in the classical limit t → 0. In it, the Hamiltonian reduces to its diagonal potential terms,

(2.7) $\hat H_{V_0-V_3,\,t\to 0} = V_0\,\hat\rho_0 + V_3\,\hat\rho_3$.

As the name of the limit implies, the ground states of this new Hamiltonian are classical states, which can be easily identified by their set of numbers of j-plaquettes, ρ = (ρ0, ρ1, ρ2, ρ3). Studying eq. (2.7) is analogous to studying Ĥ_{V0−V3} in polar coordinates (R, θ) in the limit R = √(V3² + V0²) → ∞. To find the ground states, we must minimize the energy over the surface defined by the sum rules (eqs. (2.2a) and (2.2b)). This surface can be represented as a triangle in the (ρ0, ρ1, ρ3) space, shown in fig. 2.6, with vertices O = (0, 0, 0), A = (1/3, 0, 2/3) and B = (0, 1/2, 1/2), and with a generic point P parametrized as

(2.8) $P = r\left(\frac{s}{3},\; \frac{1-s}{2},\; \frac{1}{2} + \frac{s}{6}\right), \qquad s, r \in [0, 1]$.

Inserting these relations into eq. (2.7), together with the parametrization V0/t = sin(θ) and V3/t = cos(θ), we find the energy

(2.9) $E(\theta) = V_0\rho_0 + V_3\rho_3 = \sin(\theta)\,\frac{sr}{3} + r\cos(\theta)\left(\frac{1}{2} + \frac{s}{6}\right) = r\left[\frac{s}{3}\left(\sin(\theta) + \frac{\cos(\theta)}{2}\right) + \frac{\cos(\theta)}{2}\right]$.

Let us now minimize E(θ) in terms of r and s.
The sign of (sin(θ) + cos(θ)/2) determines whether s = 0 or s = 1, and the resulting energies are respectively (in both cases, r = 1)

(2.10a) $E(\theta)|_{s=0,\,r=1} = \frac{1}{2}\cos(\theta)$, and
(2.10b) $E(\theta)|_{s=1,\,r=1} = \frac{1}{3}\left(\sin(\theta) + \frac{\cos(\theta)}{2}\right) + \frac{1}{2}\cos(\theta)$.

By analyzing where each function minimizes the energy, we find three distinct regions (fig. 2.6), bounded by the angles π/2, θ1 = arctan(−2) ≃ −63.43° and θ2 = π/2 − θ1 ≃ 153.43°, with the angles θ1 and θ2 corresponding, respectively, to the lines V0 = −2·V3 and V0 = −V3/2:

a) θ ∈ [θ1, π/2]: in this region, both E(θ)|s=0,r=1 and E(θ)|s=1,r=1 are positive, and the energy is minimized by states with r = 0 and zero energy, leading to ρ = (0, 0, 1, 0). This corresponds to the staggered states inside the maximum flux sector, f = 2;

b) θ ∈ [π/2, θ2]: the energy is minimized by the states with s = 0, resulting in ρ = (0, 1/2, 0, 1/2). The states following this j-plaquette distribution are inside the f = 1/2 sector and are the 12-fold degenerate (twice due to translation symmetry, twice due to reflection symmetry, and three times due to rotation symmetry) classical S2 crystals. We will explain the meaning of this notation in section 2.2.2. They are characterized by alternating zig-zag chains composed of 3-plaquettes or 1-plaquettes (rightmost dimer covering of fig. 2.1);

c) θ ∈ [θ2, 2π + θ1] (the remaining arc): the states with s = 1 minimize the energy, and the resulting ground states are the 3-fold degenerate star crystals seen in the RK model, with f = 0 and ρ = (1/3, 0, 0, 2/3).

Figure 2.6: Triangular surface in the (ρ0, ρ1, ρ3) space associated with the sum rules, eqs. (2.2a) and (2.2b), and phase diagram of the V0−V3 model in the classical limit t → 0, with its three phases. θ1 = arctan(−2) ≃ −63.43° and θ2 = π/2 − θ1 ≃ 153.43°. The colors in the phase diagram follow the same scheme as in fig. 2.5.

Each of the ground states described above corresponds to a vertex of the triangular surface defined by the sum rules, respectively O (staggered crystal), B (classical S2 crystal) and A (classical star crystal). At the interfaces between these regions, the energy is also minimized by the dimer coverings found on the edge linking the corresponding vertices. Let us describe them in more detail:

a) θ = π/2, OB edge: the energy is zero (and thus minimal) not only for the staggered states, but for any state with s = 0 and r ∈ [0, 1]. The states spanned by these parameters are characterized by the j-plaquette set (0, r/2, 1−r, r/2), i.e., all states with no 0-plaquettes. We believe that such states can be found in all flux sectors;

b) θ = θ1, OA edge: the energy also vanishes for s = 1 and r ∈ [0, 1], which yields the states with ρ = (r/3, 0, 1−r, 2r/3). As with the previous interface, we believe that these states can be found in all flux sectors;

c) θ = θ2, AB edge: the energy is minimized for s ∈ [0, 1] and r = 1, corresponding to ρ = (s/3, (1−s)/2, 0, 1/2 + s/6). Such configurations can be found at least in the sectors with density f ∈ [0, 1/2].

Figure 2.7: Spin and dimer structures of the classical S chain and of the ideal H chain (respectively left and right), used to build the variational states.

It is interesting to note that there is a kind of "dual" relation between the triangle inside the (ρ0, ρ1, ρ3) space and the regions of the (V3, V0) plane: the vertices of the triangle map onto the three regions of this plane, while the edges are mapped onto the interfaces between them.
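The classical phase boundaries derived above are easy to verify numerically. The following sketch (our own check, not part of the thesis) minimizes the classical energy (2.9) on a grid over the (r, s) triangle for each angle θ and reports where the minimizing corner changes; the reported boundaries should sit near θ1 ≈ −63.43°, π/2 and θ2 ≈ 153.43°.

```python
import numpy as np

def E_classical(theta, r, s):
    # Classical energy (2.9) with V0 = sin(theta), V3 = cos(theta).
    return r * ((s / 3.0) * (np.sin(theta) + np.cos(theta) / 2.0)
                + np.cos(theta) / 2.0)

grid = [(r, s) for r in np.linspace(0, 1, 21) for s in np.linspace(0, 1, 21)]

def phase_label(theta):
    r, s = min(grid, key=lambda p: E_classical(theta, *p))
    if r < 1e-9:
        return "staggered (O)"          # rho = (0, 0, 1, 0)
    return "star (A)" if s > 0.5 else "S2 (B)"

prev = None
for theta in np.linspace(-np.pi, np.pi, 1441):
    label = phase_label(theta)
    if prev is not None and label != prev:
        print(f"boundary near theta = {np.degrees(theta):8.2f} deg, entering {label}")
    prev = label
```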
Let us close this section with a final remark about the classical limit. While it gives a good idea of the phase diagram when the potential terms are dominant, one must be careful when comparing its results with the MC simulations, mainly around the angle θ = π/2. For example, both the locations and the fluxes of the three regions studied in this section are compatible with the results that will be presented in section 2.3, and even with those for a 48-plaquette system, seen in fig. 2.5. Still, we do not find any signs of an f = 1 sector between the f = 2 and f = 1/2 sectors in the classical limit. As we will see in the next subsections, the V0−V3 model favors states with no 0-plaquettes in the high-V0/t, small-V3/t region, which is dominated by the kinetic term. These states are thus described by the (non-classical) limit V0/t → ∞ with t ≠ 0, and cannot be described correctly by the t → 0 limit.

2.2.2 S and H chains

The local dimer densities found for the V0−V3 model, seen in fig. 2.15, have an interesting stripe structure, and they can be described using ansatz states formed from two different types of chain structures, separated by staggered domains with dimers parallel to the horizontal direction in the figure. Let us define these chains and some of their properties here. The first chain type, which we call the H chains, can be described as a series of second-neighboring quasi-resonating plaquettes with a local dimer density ⟨n̂i⟩ > 2, separated by plaquettes with a considerably lower dimer density. The ideal H chain is akin to the ideal plaquette state studied previously for the RK model. In the second type, called the S chains, all the plaquettes form a zig-zag-like structure and have a similar dimer density ⟨n̂i⟩, which is also higher than two, indicating that there is a high percentage of 3-plaquettes in the chain. While the H chains are purely quantum states, the S chains have a classical limit, in which all the chain plaquettes have exactly 3 dimers. Figure 2.7 shows the ideal dimer configurations of the H chain and the classical S chain, and their equivalent spin configurations. The difference between the classical and quantum S chains is akin to the difference between the classical star state, seen in the RK model for V/t → −∞, and the quantum star state seen in the rest of the star phase; it is important to distinguish well between them, due to their different natures and contributions to the ansatz states. In the context of the V0−V3 Hamiltonian, the classical S chains will appear mainly in ansatz states with V3/t ≪ 0 and/or V0/t ≫ 0, where the kinetic energy is very low compared to the V3 potential energy. The quantum S chains will appear for smaller absolute values of V3/t, where the quantum fluctuations are more important. Isolated quantum S chains, which are not in direct contact with other S chains inside the ansatz states, can be studied using the Hilbert space of the anyonic Fibonacci chains [46, 47]. From here onward, whenever we use the term "S chain" we will be referring to the quantum variety, unless stated otherwise. The S and H chains contain most of the 3-plaquettes of a given state, with 0-plaquettes potentially appearing at their interfaces. Furthermore, the mean distance d between the chains (and hence the chain density) is linked to the flux density f, allowing us to better classify the ansatz states.
Consider the staggered state of the f = 2 sector on a torus: all the dimers crossed by a non-contractible curve are aligned in the same direction, each contributing a weight equal to 2 and resulting in a flux quantum number Wy = 2·ℓy. Inserting a classical S chain into this state will remove a dimer from an edge crossed by the non-contractible curve, exchanging its weight for −1, and thus reducing the total flux by 3 (see the lower part of fig. 2.2). Inserting either a quantum S or an H chain has the same effect. For a state with n_C chains, separated by staggered regions parallel to the x direction (the horizontal direction in the figures of this chapter), we then have

$W_y = 2\ell_y - 3 n_C, \qquad f = \frac{W_y}{\ell_y} = 2 - \frac{3 n_C}{\ell_y}$.

The distance d, measured in units of the distance between nearest plaquette centers, is equal to ℓy/n_C, and thus

(2.11) $f = 2 - \frac{3}{d} \quad\Longleftrightarrow\quad d = \frac{3}{2 - f}$.

We will classify the states composed only of S (H) chains, separated by a mean distance d, as Sd (Hd) crystals. We have already seen, in the previous chapter, the S1.5 and H1.5 crystals, which are respectively the ideal star and plaquette states found in the f = 0 sector, as we should expect from eq. (2.11), and the S2 crystal, found for f = 1/2. In the following section, we will see various other domain-wall crystals, including the H2.5 crystals, found inside the f = 0.8 sector. It will be important, during the study of the phase diagram, to know the minimal distance between two chains such that no 0-plaquettes are present between them. These distances are, in decreasing order,

(2.12) $d_{S-S} = 3, \quad d_{S-H} = 2.75, \quad d_{H-H} = 2.5, \quad d_{\mathrm{Cl.}-H} = 2.25, \quad d_{\mathrm{Cl.}-\mathrm{Cl.}} = 2$,

where the subscripts indicate which chains we are comparing, the subscript "Cl." corresponding to the classical S chain. The distances are represented in fig. 2.8, where one must remember that the quantum S and H chains have a kinetic energy, and thus plaquette flips must be taken into account when determining the minimal distances involving these chains, while the classical chains are essentially "frozen". Also, there is no d_{Cl.−S} distance, since the classical S chains are a limiting case of the quantum S chains.

Figure 2.8: Distances between the S and H chains such that there are no 0-plaquettes between the chains. Only the plaquettes of the quantum S chains and the H chains, marked with red dimers and grey plaquettes, can be flipped if one adds the constraint that no 0-plaquettes are created by this operation.

For the staggered states with f = 2, eq. (2.11) tells us that the distance diverges (correctly) to +∞. The minimal distance between two chains, while keeping the reference dimer structure of fig. 2.7, is equal to 1.5, corresponding to the f = 0 sector. Reducing the distance even further would break the one-dimer-per-site condition of the classical states if we kept this reference dimer structure, but it is possible if we only keep the spin structure presented in fig. 2.7 for the classical S chains. In this case, the minimum distance between two S spin chains is equal to d = 1, corresponding to the minimum flux f = −1, and the resulting dimer covering is a staggered state with zig-zag dimer lines and no dimers parallel to the horizontal direction (fig. 2.9).

Figure 2.9: Staggered state of the f = −1 flux sector, formed by S spin chains with an inter-chain distance d = 1.
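As a quick numerical companion to eq. (2.11) and the distances (2.12), the following few lines (ours, purely illustrative) convert between chain spacing and flux density and reproduce the crystals named in the text:

```python
def flux_from_distance(d):
    # Eq. (2.11): flux density of a chain crystal with mean inter-chain distance d.
    return 2.0 - 3.0 / d

def distance_from_flux(f):
    return 3.0 / (2.0 - f)

# d = 1 -> f = -1 (spin-chain staggered state); d = 1.5 -> f = 0 (S1.5/H1.5,
# the star and plaquette states); d = 2 -> f = 1/2 (S2); d = 2.5 -> f = 0.8
# (H2.5); d = 3 -> f = 1 (S3); and d -> infinity gives f -> 2 (staggered).
for d in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"d = {d:4.2f}  ->  f = {flux_from_distance(d):+.3f}")
```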
Increasing the mean distance between the spin chains increases the (still negative) flux density, and eventually allows enough space to build the original classical S chains, up to d = 1.5, where, again, we are in the f = 0 sector and can return to the reference dimer structure of fig. 2.7. The dynamics of the f < 0 and f > 0 flux sectors are thus quite distinct, due to the different structures of the staggered domains. We did some simulations for the f < 0 flux sectors, but the ground state was never located inside one of them. Because of this, and because of the results found through the perturbative analysis near the RK point (section 2.4), we decided to restrict most of our simulations to the f ≥ 0 flux sectors.

Before we pass to the quantum phase diagram, let us briefly present the interpretation of the flux and of the S and H chains in terms of the standard height representation [48, 27, 49, 50, 51]. In the context of dimer coverings on a honeycomb lattice, the height model corresponds to a diamond covering of a triangular lattice (fig. 2.10), with each dimer covering presenting a surface slope, or tilt, which is directly proportional to the flux of the state. Consider again a staggered state of the f = 2 sector. In the standard height notation, this state is a simple, tilted (1, 0, 0) plane. In the figure, the tilt can be visualized as the decreasing sequence of heights along the vertical direction. Adding a classical S chain to this state will reduce its flux, as we have just seen, but it will also add a step to the height profile, taking the state away from the (1, 0, 0) plane and reducing the slope. We can keep adding classical S chains until we reach the f = 0 flux sector, associated with the "flat" (1, 1, 1) plane. The same operation can be done using the quantum S and H chains instead, with the same results; the only difference is that the heights around the steps fluctuate locally, due to the quantum fluctuations. This interpretation of the S and H chains will be important in chapter 3, where we study a model akin to the standard height model. Finally, it should be noted that, while we defined the flux density as a rational number, this is due to the necessity of limiting the system size to a finite value L in order to apply numerical methods such as MC or ED; in the thermodynamic limit, irrational fluxes are also allowed.

Figure 2.10: Height representation of a dimer covering of the honeycomb lattice as a diamond covering of the triangular lattice, with the corresponding integer heights on the vertices.

2.3 Regions of the quantum phase diagram

We will now pass to a more detailed picture of the phase diagram, using the adapted MC algorithm. Taking into account the need for a fine enough flux density discretization, we decided to use lattices with dimensions 60 × 60, giving ∆f = 0.1. We have seen in the previous chapter that such systems are big enough to give a good measurement of the mean energy of the ground state, and this choice of ∆f gives us access to all the flux sectors found in the classical limit. For the temperature parameters of the MC simulations, we chose the values β = 9.6 and ∆β = 0.01, after comparing the results obtained through several simulations (in a similar fashion to what was done for fig. 1.4) and considering the extra time needed to run the simulations over all the flux sectors.
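For concreteness, the accessible flux densities of eq. (2.6) for the lattices used here can be enumerated directly (a trivial sketch of ours):

```python
def accessible_fluxes(ly, step=6):
    # Eq. (2.6): f = 6k / ly with k an integer and 0 <= f <= 2; the step of 6
    # (rather than 3) keeps the 3D Ising boundary conditions periodic.
    return [step * k / ly for k in range(2 * ly // step + 1)]

print(accessible_fluxes(60))  # Delta f = 0.1: f = 0.0, 0.1, ..., 2.0
```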
Unless stated otherwise, these size and temperature parameters were used for all our simulations. Figure 2.11 shows the phase diagram obtained through these simulations, with the coordinates (V3/t, V0/t) centered at the RK point, (1, 0). We have again the three regions seen in section 2.2.1, with f = 0, 1/2 and 2, and a fourth new one appearing between the f = 2 and f = 1/2 regions, presenting what seems to be a cascade of flux sector transitions.

Figure 2.11: Phase diagram of the V0−V3 model. The axes are centered at the RK point, (V3, V0) = (1, 0), not at the origin. There are in total four different regions, separated by the thicker lines: a) the staggered sector, with maximum flux density f = 2; b) the f = 0 region, containing the star and plaquette phases, found in the original RK model; c) the f = 1/2 region, with the S2 crystal ground state; and d) the flux density fan, with f varying from 1/2 to 2. The thin lines inside the fan region indicate the transitions from a flux density f to f + 0.1. The star to plaquette phase transition is marked with a dashed line, and becomes especially hard to pinpoint near the RK point. The dots represent the MC results used to build this diagram. The dimer configuration and local dimer density of the "ideal" state are represented for most phases (for the fan region, see fig. 2.15). The classical-limit asymptotes (section 2.2.1) for the f = 0 interface are marked by the black lines, and the angles found with the perturbative analysis near the RK point (section 2.4) are represented in the inset.

The RK point is, again, one of the only points of the phase diagram that can be described analytically. It is shared by all the flux sectors and is infinitely degenerate. For any value of f, it is possible to build one, and only one, ground state with vanishing energy at this point, following eq. (1.25),

(2.13) $\left|\Psi^f_{\text{QDM,RK}}\right\rangle = \frac{1}{\sqrt{N_f}} \sum_j \left|\psi^f_j\right\rangle$,

where {|ψ^f_j⟩} is the classical orthonormal basis of the Hilbert space, restricted to a single flux sector f. Now, without further ado, let us describe these four regions.

2.3.1 f = 2 region: staggered phase

Any state with at least one 3- or 0-plaquette will have a positive energy for V3/t > 1 and V0/t ≥ 0, and the same can be said for V0/t < 0 if V3/t is large enough. Under this condition, the ground states are the staggered crystals of the f = 2 and f = −1 flux sectors, which are composed only of 2-plaquettes and have an energy equal to zero for any values of Vi/t. A large part of the right half-plane is, then, covered by a staggered phase. The interface with the f = 0 flux sector, in the lower right quadrant, is determined by whether the negative ρ̂0 term and the kinetic energy can compensate the positive ρ̂3 term, and the angle of its asymptote tends to the classical limit θ1. The interface with the fan region is marked by the half-line (V3/t = 1, V0/t > 0). Notice that, while the RK point is shared by all the flux density sectors, this half-line belongs only to the f = 2 sector.

2.3.2 f = 0 region: star and plaquette phases

In the lower left corner of the phase diagram, the ground state is inside the f = 0 flux sector, containing the star and plaquette phases studied in the previous chapter. As in the RK model, the plaquette phase is found near the RK point, while the star phase is adiabatically connected to the 3-fold degenerate star crystal found in the classical limit. As we have seen before, this crystal maximizes the number of 3- and 0-plaquettes, with ⟨ρ̂3⟩ = 2/3 and ⟨ρ̂0⟩ = 1/3, explaining why it is the state that dominates the (V3/t < 0, V0/t < 0) quadrant. The interface of this region with the staggered one, in the lower right quadrant, follows the results of the classical limit, tending asymptotically to the line V0/t = −2V3/t, beyond which the energy of the star crystal becomes positive. The same thing happens on the other side of
the phase diagram, where the interface with the f = 1/2 region tends to the line V0/t = −(1/2)·V3/t, above which it is more advantageous to have an S2 crystal, which maximizes ⟨ρ̂3⟩ under the constraint of having no 0-plaquettes, than to keep the star crystal.

Figure 2.12: Locating the transition between the star and plaquette phases for the V0−V3 model: (a) root-mean-square magnetization ⟨m̂²⟩^{1/2} and (b-f) sublattice dimer densities ⟨n̂_{A,B,C}⟩ as functions of V0/t, for different values of V3/t. In all cases, β = 19.2 and ∆β = 0.02.

The internal phase transitions of the f = 0 sector deserve special attention. For the RK model, which corresponds to the line V0 = 0, we found a first-order transition point between the star and plaquette phases at V3/t ≈ −0.228, marked by a sharp drop of the RMS magnetization. We can still use this order parameter, together with the sublattice densities ⟨n̂A⟩, ⟨n̂B⟩ and ⟨n̂C⟩, to find the star-plaquette transition in the V0−V3 model, as we did in section 1.5.2. We ran a series of simulations for the f = 0 flux sector, with the same temperature parameters as those used for the RK model (β = 19.2 and ∆β = 0.02), to determine the star-plaquette phase transition of the V0−V3 model. The results are represented in fig. 2.11 as a dashed line. Notice that this line is incomplete: we did not represent the star-plaquette transition for V3/t > 0.7. In the previous chapter, we saw that the gap to the first excited state (in the same flux sector!) becomes very small near the RK point (see fig. 1.10). This is associated with an increased correlation time (measured by the integrated autocorrelation, section 1.4) in the Monte Carlo simulations, making it difficult to determine exactly the ground state in this area. The same effect is also present in the V0−V3 model. As we increase V3/t, the star-plaquette transition approaches the area near the RK point, and this transition crosses over into the low-gap region. This shows up in the RMS magnetization as a decrease of the drop amplitude (fig. 2.12a), up to a point where it becomes so small that it is impossible to distinguish it from the fluctuations due to the long correlation times. A similar effect appears for the sublattice dimer densities ⟨n̂_{A,B,C}⟩ (figs.
2.12b to 2.12f): the sharp shifts of these order parameters become duller for V3/t > 0.7, and at the same time the difference between the two density "levels" becomes smaller.

Towards the "planar trigone" states: While studying the f = 0 sector, we also found some interesting new ground states. These states appear for relatively high positive values of V0/t and for negative values of V3/t (fig. 2.13), always beyond the interface between the f = 0 and f = 1/2 regions; thus they are never a ground state of the full V0−V3 model. They are an attempt of the f = 0 sector ground state to minimize the energy by reducing the number of 0-plaquettes, while maintaining a high number of 3-plaquettes (favored by the negative value of V3/t). To do so, the original star crystal is broken into triangular star-like clusters, with boundaries composed of 1-plaquette lines and 2-plaquette corners (see fig. 2.13a for the corresponding local dimer densities). As V0/t increases, the number of clusters increases, while their mean size diminishes, in such a way as to reduce the number of 0-plaquettes even further. The final configurations (lower right corner of fig. 2.13a) are formed by planar trigones of four 3-plaquettes, which is the largest triangular star cluster one can build without using 0-plaquettes, and are characterized by the dimer densities ρ = (0, 4/9, 1/9, 4/9). Since we can shift a line of planar trigones without changing the energy, these states (which we will call the planar trigone states) are infinitely degenerate in the thermodynamic limit.

Figure 2.13: Trigone states: local dimer density ⟨n̂i⟩ of the ground state in the f = 0 sector, for various values of V0/t and for (a) V3/t = −4 and (b) V3/t = −2. Notice that, in the former case, the local dimer density presents some isolated resonating defects for V0/t > 4.

Figure 2.14: Trigone states: normalized numbers of j-plaquettes, ⟨ρ̂j⟩, of the ground state in the f = 0 sector, for various values of V0/t and for (a) V3/t = −4 and (b) V3/t = −2.

It should be noted that the transition towards the planar trigone states happens through gradual transformations of the star crystal, and so it is not a first-order phase transition. Also, this new region reminds us of the behavior seen in section 1.5.6. The planar trigone states are an attempt of the f = 0 sector to emulate the ground state of another flux sector (namely, the S2 crystal), and the autocorrelation times of the simulations done in this region, for this flux sector, are relatively high. Still, they are nowhere near as high as those seen for the staggered states, where the ground state (fig. 1.14) is less degenerate and harder to reach with the MC algorithm. Because of all this, the observables of these states should be interpreted only qualitatively. The RMS magnetization, used previously to study the star-plaquette transition, is not a suitable observable for describing the transition to the planar trigone states. This is because two star domains separated by a 1-plaquette line must have opposing spin signs in the 3D Ising model representation in order to respect the dimer constraints, an effect akin to the one seen for non-local flip operations (fig. 2.3).
The local dimer density ⟨n̂i⟩ is still useful to describe these new states qualitatively, as we have seen in the preceding figures, and the same can be said of the numbers of j-plaquettes ⟨ρ̂j⟩. The curves of the ⟨ρ̂j⟩ for V3/t = −4 (fig. 2.14a) evolve slowly from the star to the planar trigone states. This, together with the localized transformations seen in fig. 2.13a, indicates again that there is a crossover between these two states, rather than a sharp transition. The dimer densities tend asymptotically to the values of the "pure" planar trigone states by V0/t = 5.5. This crossover presents an interesting interplay with the plaquette phase. Figures 2.13b and 2.14b show the local and global dimer densities for V3/t = −2. We still see the star phase and the planar trigone states, but no crossover is visible. Instead, the plaquette phase appears in between these two states. Comparing figs. 2.14a and 2.14b, we see that the plaquette phase reduces the number of 0-plaquettes faster than the crossover process, at the cost of also reducing ⟨ρ̂3⟩. It seems that, for V3/t ≳ −3, the contribution to the energy of the ρ̂3 term is weak enough to make this cost small compared to the advantage of having fewer 0-plaquettes, and we have, as before, a first-order transition from the star to the plaquette phase. As V0/t increases, the 0-plaquette density of the plaquette phase approaches the corresponding density of the planar trigone states, and we have a second phase transition. The presence of resonating plaquette defects for V0/t = 5 in fig. 2.13b indicates that the four-3-plaquette trigones are formed by taking the star phase and stopping the resonance of three 3-plaquettes. Finally, from fig. 2.14b, one might think that this second phase transition is also of first order, but we must remember that the autocorrelation times of the MC simulations in the planar trigone phase are relatively high, and so this region must be studied further before drawing any conclusions.

2.3.3 f = 1/2 region: S2 phase

In the upper left corner of the phase diagram, the V0−V3 model favors states which maximize the number of 3-plaquettes while reducing the number of 0-plaquettes. The states that best fulfill this condition are inside the f = 1/2 flux sector, and are linked adiabatically to the classical S2 crystal (dimer covering in fig. 2.1, local dimer density in fig. 2.15, upper left corner). Bear in mind that the classical S2 crystal itself is a ground state only in the classical limit, and applying the kinetic, plaquette-flip operator to any of its 3-plaquettes will create 0-plaquettes, raising the energy. For V3/t ≲ −1, though, the ρ̂3 potential term is strong enough to compensate this effect, and the ground state's local dimer density is almost identical to that of the classical state. Like all the stripe states (section 2.2.2), the S2 states break the rotational and reflective symmetries, resulting in a 12-fold degenerate state. As discussed in section 2.3.2, the boundary of this region with the zero flux sector follows the line V0/t = −(1/2)·V3/t, which agrees with the classical limit angle θ2. As far as we can tell from our MC simulations, this flux sector does not present any internal phase transitions in this region, unlike the f = 0 sector, which presents the star-plaquette phase transition.

2.3.4 The fan region: 1/2 < f < 2

The most interesting region of fig. 2.11 is located between the S2 and staggered crystals.
For large values of V0/t, this region forms a narrow vertical band roughly centered at V3/t = 0, with −1 ≲ V3/t < 1. As we cross it from left to right, we see a series of transitions between different flux sectors, marked by thin lines in fig. 2.11, with an increasing flux density, going from f = 1/2 to f = 2. Near the RK point, this band is compressed between the f = 0 and f = 2 sectors, forming a fan-like structure centered at the RK point. Because of this last feature, we will call this region the fan region. We will describe it in two parts: in the first, we will study the transitions at a constant, high value of V0/t, going from the f = 1/2 towards the staggered flux sectors; in the second, we will set V3/t = 0. The region near the RK point will be detailed in the next section, with a perturbative analysis.

Figure 2.15: Evolution of the local dimer density ⟨n̂i⟩ from the f = 1/2 sector to the fan region. For all simulations, V0/t = 6, and the flux density sector presented corresponds to the model's ground state.

Description of the fan region using the ansatz states: The best way to describe the transition from the S2 region to the fan region is to study the evolution of the ground state's local dimer density, ⟨n̂i⟩, for a series of V3/t values at a fixed and relatively high value of V0/t. Along such a line, we see an evolution of the ground state's flux sector, which starts at the f = 1/2 region and then increases inside the fan region until we reach the V3/t ≥ 1 region, where the flux is maximal and equal to 2. During this progression, the local dimer density shows a stripe structure, depicted in fig. 2.15 for V0/t = 6 and −2 ≤ V3/t ≤ 0.75, with the mean distance between the stripes increasing with the flux (in accordance with eq. (2.11)). As we said previously, these stripes themselves have structures similar to the S and H chains introduced in section 2.2.2. Let us briefly describe the apparent stripe composition of the ground state seen in fig. 2.15. At V3/t = −2, the ground state is an almost classical S2 crystal, found inside the f = 1/2 sector. The stripe distance (and thus the flux density) increases with V3/t, and we see the appearance of states mixing stripes with S and H chain-like structures (with a progressively larger percentage of the latter). At V3/t = 0, and for V0/t = 6, the ground state is inside the f = 0.8 sector, forming an almost pure H2.5 crystal. For 0 < V3/t ≲ 0.75, we again have states mixing S and H chains, but they are much harder to differentiate than in the previous interval. Finally, for V3/t = 0.75, the ground state is the S3 crystal. For V3/t > 0.75 (not represented in the figure), the differences between the local dimer densities are too small to allow the identification of any stripe structure. This behavior (and the flux increase) is caused by a subtle competition between the kinetic and potential energies of the V0−V3 model. We will characterize it qualitatively using the ansatz states, formed by tensor products of the isolated S (classical and quantum) and H chains introduced in section 2.2.2. To make this comparison, though, we must be aware of certain caveats:

• First, these tensor product states are mainly valid for V0/t ≫ 0, where the ground state must minimize the number of 0-plaquettes, which can only be formed between the chains.
Figure 2.11 indicates that the interfaces between the different flux sectors stay relatively parallel to the V0/t axis for V0/t > 4, and so we can use the ansatz states to describe qualitatively the ground states found for V0/t = 6;

• Second, the mean inter-chain distance obtained from eq. (2.11) is not always commensurate with the distances between the S and H chains, or at least it cannot be easily written as a function of the latter distances. This may result in non-uniform chain distributions;

• Third, when building these ansatz states, we completely ignore the interactions between the chains. In the measured local dimer density, these interactions can result in asymmetric chains (see, for example, the second and third chains for f = 0.7, d = 30/13 ≃ 2.308);

• Fourth, while the temperature chosen for the QMC simulation is suitable for global observables, such as the mean energy, it might have some effect on local observables, such as the local dimer density, for which we do not average the measurements over all the lattice's plaquettes.

With these points in mind, let us describe the ansatz states that most directly represent the dimer densities seen in fig. 2.15. Since V0/t is positive, these states must minimize the number of inter-chain 0-plaquettes, and the simplest ansatz states that we can build under this condition are

• the S2 crystal, seen for V3/t = −2, composed of classical S chains, for which we ignore the effects of the kinetic energy on the chain and the 3-plaquettes do not flip;

• the H2.5 crystal, seen here for V3/t = 0;

• the S3 crystal, formed from quantum S chains, seen here for V3/t = 0.75;

• and a crystal that uniformly alternates S and H chains, with an inter-chain distance equal to d = 2.75.

This latter state does not appear in fig. 2.15 - it is indeed impossible to simulate it with a 60 × 60 lattice, since it is found inside the f = 10/11 flux sector - but we will see it in the next paragraph, where we study the V3/t = 0 case. The energy balance of the V0−V3 model depends on the signs of the three terms of its Hamiltonian, eq. (2.3): the kinetic energy (which is always negative), the potential energy proportional to ρ̂0 (which is always positive for V0/t > 0), and the potential term proportional to ρ̂3 (which is positive or negative, depending on the sign of V3/t). It is useful, then, to separate the analysis into two parts, for negative and positive values of V3/t. For V3/t < 0, the positive ρ̂0 potential energy can be compensated by both the kinetic energy and the ρ̂3 potential, with the latter term becoming weaker as V3/t increases towards zero. The ρ̂3 term favors ground states with a high chain density, and we have seen in section 2.3.3 that, for V3/t ≪ 0, the state with the highest chain density and no 0-plaquettes is the classical S2 crystal. As V3/t increases towards ∼ −1, the quantum fluctuations on the now quasi-classical S2 crystal also increase, and we see the appearance of 0-plaquettes between its stripes. The ρ̂3 potential is still strong enough to compensate them, though, and the ground state stays inside the f = 1/2 flux sector. Now, inside the −1 ≲ V3/t < 0 interval, the ρ̂3 potential becomes weaker and weaker as V3/t increases, and the V0−V3 model progressively transitions to ground states that increase the kinetic energy, while still maintaining the highest chain density possible and reducing the number of 0-plaquettes.
In terms of the ansatz states, this can be done by progressively exchanging, as V3/t goes towards zero, the classical S chains of the classical S2 crystal for quantum chains, which have a non-zero kinetic energy. From eq. (2.12), the minimal distance between a classical S chain and a quantum chain is d_{Cl.−H} = 2.25, and so the H chains (which also have a small distance between themselves, d_{H−H} = 2.5, compared to the S chains) fill this role while maintaining a high chain density. These distances are still larger than d_{Cl.−Cl.} = 2, though, and so the flux increases as the classical S chains are exchanged for H chains. This process describes well what we see in fig. 2.15 for V3/t < 0, where we have stripes similar to the H chains, separated by a distance equal to d_{Cl.−H} from stripes similar to the S chains. This also indicates that there are low quantum fluctuations on the S-like stripes, and so most of the kinetic energy is due to the H-like stripes. For V3/t = 0, the ρ̂3 potential term no longer plays a role, and for V0/t = 6 we have a state similar to the H2.5 ansatz state. We will study this case in more detail in the next paragraph. For V3/t > 0, the ρ̂3 potential term becomes positive. The kinetic energy must now compensate both the ρ̂3 and the ρ̂0 terms. The stripes still contain most of the kinetic energy, and so states with a high stripe density have a higher kinetic energy. At the same time, the ρ̂3 potential energy becomes stronger as V3/t increases towards one, and states with a low stripe density reduce it. This results in a reduction of the stripe density (and an increase of the flux density) as V3/t increases, since the kinetic term becomes less efficient at compensating the ρ̂3 potential, until we arrive at V3/t = 1, where the ground state is given by the staggered states of the f = 2 and f = −1 sectors. A description in terms of ansatz states becomes somewhat more complicated for V3/t > 0, since the local dimer density differences in fig. 2.15 become too small to visually distinguish with certainty S-like and H-like structures for the stripes. Still, we can propose some ansatz states for 0 < V3/t ≲ 0.75, considering the flux increase. In this region, the mean distance between the stripes is smaller than three. This suggests that the states in this region have quantum S chains and H chains, since the minimal distance between quantum S chains such that no 0-plaquettes are created in an ansatz state is equal to d_{S−S} = 3. This is in accordance with what we see for f = 0.9 in fig. 2.15. For V3/t ≈ 0.75, this figure presents a relatively clear dimer density, corresponding to an S3 crystal. Finally, for 0.75 ≲ V3/t < 1, the dimer density differences (not shown in fig. 2.15) are too small to even identify any stripes, and so it becomes difficult to build an ansatz state. In short, using the ansatz states, we can clearly describe the process that results in the flux increase inside the fan region for V3/t < 0. For 0 < V3/t ≲ 0.75, we can propose a description using these states, and for V3/t > 0.75 we can no longer verify the presence of stripe structures. To better describe the V3/t > 0 region, we would have to run MC simulations at a lower temperature (larger inverse temperature β), to avoid low-gap effects similar to those seen for the plaquette phase near the RK point in section 1.5.4. Running them for higher values of V0/t, where the ansatz states are a better approximation, might also help.
With the data that we have for now, the best that can be done is an analysis of the local dimer density using threshold values, allowing us to differentiate the possible stripe structures from the staggered domains.

The zero flux to fan region transition, $V_0$ model: The $V_3/t = 0$ line of the phase diagram (fig. 2.11) deserves special attention. It allows us to study the transition from the $f = 0$ sector to the fan region, and it also simplifies the dynamics of the $V_0$-$V_3$ model somewhat: the potential term is now only proportional to $\hat\rho_0$, and the $3$-plaquettes affect only the kinetic term. This $V_0$ quantum dimer model was our initial proposal for an altered RK model, due to its relevance for an Ising string-net model at the points $V_0/t = \pm 1$, and the presence of a flux cascade along this line motivated us to study the more general, mixed $V_0$-$V_3$ model. Also, in the context of the classical limit $t \to 0$ of the general $V_0$-$V_3$ model, this line corresponds to the boundary between the $S_2$ and the staggered crystals. It is instructive, then, to run the MC simulations for $V_3/t = 0$ and $V_0/t \gg 0$, to differentiate the $t \to 0$ classical limit from the $V_0/t \to \infty$, $t \neq 0$ limit.

To study the $V_0$ model, we wanted to use a finer flux density discretization $\Delta f$ than the one available from the $60\times60$ lattice, mainly to better describe the $f = 0$ to fan transition. To do so, we decided to use several system sizes, with different flux density discretizations, instead of simply increasing the system size to reduce $\Delta f$. The latter approach is, as we discussed in section 2.1.2, numerically very expensive, and the energies per site (eq. (1.13)) of different system sizes can be easily compared if the lattices are big enough that the finite-size effects on the energy per site are negligible.

Figure 2.16 shows the resulting energy crossings of the $V_0$ model for some flux sectors between $f = 0$ and $f = 0.8$. We notice a transition from the $f = 0$ sector to the fan region at $V_0/t \approx 2.54$. Inside it, the ground state follows a cascade of increasing flux density: it starts at the $f = 11/15 \approx 0.7333$ flux sector, but transitions almost immediately to the $f = 0.75$ sector at $V_0/t \approx 2.6$, before passing to the $f = 0.8$ sector near $V_0/t \approx 3.28$. While we cannot be certain that the $f = 11/15$ sector is the first one visited inside the fan region, the other energy curves assure us that the flux of the first sector is above $f = 0.72$. Consequently, we can confidently argue that the flux sectors with $0.5 < f < 0.72$ do not appear in the phase diagram of the $V_0$ model, even if they are present in the full fan region, as we have seen in fig. 2.15.

Figure 2.16: Energies per site for various flux density sectors and $V_3 = 0$, obtained through MC simulations for various system sizes, $\beta = 9.6$, $\Delta\beta = 0.01$.

There is still another flux sector transition, which happens for larger values of $V_0/t$ and is a bit more complicated to pinpoint. Initially, we found a transition near $V_0/t \approx 200$, between the flux sectors with $f = 0.8$ and $f = 10/11 \approx 0.909$, which correspond respectively to the chain distances $d_{H-H} = 2.5$ and $d_{S-H} = 2.75$. Their local dimer densities and energies per plaquette are presented in figs. 2.17a and 2.17b.
Notice that the energy difference between the two states is rather small, and so we had to use a low temperature ($\beta = 19.2$) and a finer temperature discretization ($\Delta\beta = 0.005$) to guarantee their differentiation. It is interesting to see that the local dimer density of the ground state inside the $f = 0.8$ sector is essentially the same $H_{2.5}$ crystal seen at $V_0/t = 6$ (lower left corner of fig. 2.15), indicating that this state is stable over a large interval of $V_0/t$. On the other hand, the ground state of the $f = 10/11$ sector shows alternating S and H-like chains, separated by a mean distance $d_{S-H} = 2.75$, which is the minimal distance between quantum S and H chains that avoids $0$-plaquettes under uncorrelated flips.

Figure 2.17: Local dimer density $\langle \hat n_i \rangle$ (panel (a): $V_0/t = 30$, $f = 6/7$; $V_0/t = 100$, $f = 0.8$; $V_0/t = 500$, $f = 10/11$) and energy per plaquette $\langle \hat H_{QIM} \rangle$ crossings (panels (b) and (c)) for large values of $V_0$ (in all cases, $V_3/t = 0$). $\beta = 19.2$, $\Delta\beta = 0.005$.

Since $V_3/t = 0$ and $\langle\hat\rho_0\rangle \to 0$, the potential term of the energy per site of these states is null for the $V_3$ part and very small for the $V_0$ part, and we can estimate their energy per plaquette using the kinetic energy per plaquette of the isolated ideal S and H chains. The former can be calculated using the anyonic Fibonacci chains [46, 47], which share the Hilbert space of the isolated S chains, though with different Hamiltonians. The corresponding energy is

(2.14) $E_S \approx -0.6035605(9)\,t$, for $V_3/t = 0$.

For the ideal H chains, we can easily calculate the kinetic energy using their ideal representation, finding

(2.15) $E_H = -0.5\,t$.

For an $H_d$ crystal, the variational energy per plaquette, $E_{H,d}$, is equal to the energy of each H chain (supposed isolated, due to the distance) times the number of chains, divided of course by the total number of plaquettes. For an $\ell_x \times \ell_y$ honeycomb lattice, we have

(2.16) $E_{H,d} = (\ell_x E_H)\,\dfrac{\ell_y}{d}\,\dfrac{1}{\ell_x \ell_y} = E_H\,\dfrac{2-f}{3} \;\Rightarrow\; E_{H,d} = -\dfrac{2-f}{6}\,t.$

Notice that, for $f = 0$, we recover the $E = -t/3$ of the ideal plaquette state of the original RK model. The dynamics is better described if $d \geq 2.5$ ($f \geq 0.8$), for which there are no inter-chain $0$-plaquettes. For $f = 0.8$, we have $E_{H,2.5} = -0.2\,t$, which is slightly above the value measured with the MC simulations, $E \approx -0.22\,t$.

Following the same ideas as in eq. (2.16), we can calculate the energy of a variational state formed by S and H chains separated by a distance $d$; we only have to exchange the H chain energy $E_H$ for the average $(E_H + E_S)/2$:

(2.17) $E_{Mixed,d} = \dfrac{E_H + E_S}{2}\,\dfrac{2-f}{3} \;\Rightarrow\; E_{Mixed,d} \approx -1.1035605(9)\,\dfrac{2-f}{6}\,t.$

In this case, the chains are isolated if $d \geq 2.75$, and the lowest energy is obtained for $f = 10/11$, with $E_{Mixed,2.75} \approx -0.20065\,t$, slightly lower than $E_{H,2.5}$ but still higher than the values obtained through the MC simulation. Notice, though, that the difference between the measured energies, for $V_0/t \geq 600$, is very similar to the difference between $E_{H,2.5}$ and $E_{Mixed,2.75}$.

One problem with this analysis is that, applying it to the $S_d$ crystals, we obtain the variational energy

(2.18) $E_{S,d} = E_S\,\dfrac{2-f}{3} = -0.6035605(9)\,\dfrac{2-f}{3}\,t,$

which, for an $S_3$ crystal ($f = 1$), results in $E_{S,3} \approx -0.201\,t$, lower than either $E_{H,2.5}$ or $E_{Mixed,2.75}$.
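As a quick numerical check of eqs. (2.16)-(2.18) (an illustration we add here, with hypothetical helper names), note that $(2-f)/3 = 1/d$, so each estimate is just the isolated-chain energy divided by the inter-chain distance:

```python
# Illustrative sketch of the variational estimates of eqs. (2.16)-(2.18),
# assuming the chain energies quoted in the text (t = 1).
E_H = -0.5          # eq. (2.15), ideal H chain
E_S = -0.6035605    # eq. (2.14), quantum S chain (Fibonacci-chain result)

def E_per_plaq(e_chain, d):
    # e_chain * (2 - f)/3 with f = 2 - 3/d reduces to e_chain / d
    return e_chain / d

candidates = {
    "H_2.5 crystal (f = 0.8)":          E_per_plaq(E_H, 2.5),
    "S/H mixed, d = 2.75 (f = 10/11)":  E_per_plaq((E_H + E_S) / 2, 2.75),
    "S_3 crystal (f = 1)":              E_per_plaq(E_S, 3.0),
}
for name, e in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: E = {e:.5f} t per plaquette")
# The S_3 estimate comes out lowest (~ -0.2012 t), below E_H,2.5 = -0.2 t and
# E_Mixed,2.75 ~ -0.20065 t: this is the tension with the MC results noted next.
```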
During the MC simulations, we never found the $S_3$ crystal as a ground state for $V_3/t = 0$ and any value of $V_0/t$, which contradicts the variational analysis. These considerations, the fact that the variational energies present a systematic shift with respect to the MC simulations, and the small dimer density asymmetries that the chains of the $f = 10/11$ state present in fig. 2.17a all indicate that the hypothesis of independent chains should be improved. Considering this, we built an ansatz state consisting of pairs of coupled S chains, separated by a distance $d = 2.5$, with each chain pair separated by a distance $d = d_{S-S} = 3$. The resulting state is then part of the $f = 10/11$ flux sector, and calculations similar to the ones done to determine the $E_S$ energy lead to an energy $E_{S,Coupled} = -0.212\,t$, lower than the previous variational energies and considerably closer to the MC simulation results.

This new ansatz state, with coupled chains, motivated us to analyze other states mixing isolated H chains and $n$ coupled S chains. These states have a flux following the equation

(2.19) $f = \dfrac{4n + 2}{5n + 1}, \quad n \in \{2, 3, 4, \ldots\}.$

This family of flux states contains the $f = 0.8$ ($n \to \infty$) and $f = 10/11$ ($n = 2$) flux sectors. Preliminary results indicate that either the $f = 6/7$ ($n = 4$) or the $f = 11/13$ ($n = 5$) flux sector contains the ground state for $V_0/t \gtrsim 10$ (see fig. 2.17c for the energy crossing and fig. 2.17a for the ground state's dimer density for $f = 6/7$), but their energies are too similar, and further simulations with an even finer temperature discretization $\Delta\beta$ must be done before we can draw any conclusions.

Finally, let us compare the behavior seen up until now for $V_3 = 0$, $V_0/t \gg 0$ and $t \neq 0$ with the classical limit $t \to 0$ at the angle $\theta = \pi/2$, which also corresponds to $V_3 = 0$. In the latter case, we found a degenerate ground state, formed by all the dimer coverings with no $0$-plaquettes, which (we believe) can be found in any flux sector. Since $V_3 = 0$ and $t \to 0$, the energy of these states is always equal to zero. In the MC simulations, we also found a ground state with no $0$-plaquettes, as one should expect, but its energy is negative, issued from the kinetic term of the $V_0$-$V_3$ Hamiltonian (eq. (2.3)). This ground state has a stripe structure, and thus it is degenerate due to the broken symmetries, but it is not degenerate with states from other flux sectors, with other dimer configurations. The difference between the two cases, as one could expect, comes from the kinetic term. The classical limit $t \to 0$ is only a valid approximation of the quantum model's ground states if these have a potential energy considerably bigger than their kinetic energy in the $V_0$-$V_3$ model. This condition fails only at the angle $\theta = \pi/2$ and in the staggered region. In the staggered region, both the potential and the kinetic energy are zero, and so we can use the classical limit with no problems. At the angle $\theta = \pi/2$, on the other hand, the kinetic energy of the states found there is non-zero, and it breaks the degeneracy of the classical limit.
2.4 Perturbation analysis near the RK point

Let us close this chapter with an analysis of the $V_0$-$V_3$ model near the RK point, where, as we will see, the behavior is similar to the "Cantor deconfinement" scenario studied by Fradkin et al. Near this point, the fan region seems to be "compressed" between the $f = 0$ and the $f = 2$ flux sectors (fig. 2.11), and we have said in section 2.3 that the RK point itself is infinitely degenerate: for any value of $f$, we can build a state with minimal, vanishing energy at the RK point, formed by a superposition of all the classical dimer coverings possible inside the given flux sector (eq. (2.13)). All these states are topologically isolated from each other, and so this degeneracy does not impose convergence problems on our MC measurements of the energy near the RK point. Still, we have seen in the previous chapter convergence problems near this point for the RK model, due to the vanishing gap inside a flux sector (fig. 1.10). This does not affect the measurement of the energy itself: since the difference between the energies of the ground state and the first excited state goes to zero, their mixing does not bias the energy estimate. Even if the energy observables have a good auto-correlation near the RK point, getting a more detailed picture of this region through the Monte Carlo simulations can still be complicated, since the energies are very similar and it becomes difficult to identify the positions of the energy crossings. Fortunately, we can describe the model's behavior near the RK point using a perturbation analysis.

For simplicity, let us fix $t = 1$. The first-order perturbation in $V_0$ and $V_3 - 1$ of the Hamiltonian eq. (2.3), near the RK point ($V_3 = 1$, $V_0 = 0$), is

(2.20) $\hat H_{Pert} = \hat H_{RK} + V_0 \hat\rho_0 + (V_3 - 1)\hat\rho_3,$

which corresponds to an energy per plaquette

(2.21) $E_{Pert} = V_0 \langle\hat\rho_0\rangle + (V_3 - 1)\langle\hat\rho_3\rangle,$

since $E_{RK} = 0$. Recall that the $\langle\hat\rho_i\rangle$'s are the expectation values of diagonal operators in the dimer basis. We will evaluate them on the unperturbed RK states of each flux sector, which, again, are the superpositions of all the classical dimer coverings of a given flux sector, with the same weight for each covering. These expectation values are then given by the classical mean number of $j$-plaquettes found inside a given flux sector, $\rho_j(f)$. The energy per plaquette is then a function of $f$,

(2.22) $E_{Pert}(f) = V_0 \rho_0(f) + (V_3 - 1)\rho_3(f).$

Figure 2.18: (a) Plaquette densities $\rho_i$ of the RK ground state from different flux sectors, as a function of the flux density $f$. (b) Ground state flux density $f(\theta)$ as a function of the angle $\theta$. $f(\theta)$ is discontinuous at $\theta_1 \simeq 1.84695$, where it drops from $f_1 \simeq 0.195654$ to $f = 0$, and at $\theta_2 \simeq 4.8268$ (not shown), where it jumps from $f = 0$ to $f = 2$.

The $\rho_j(f)$'s can be calculated using a transfer matrix method (see appendix F for a detailed derivation), and are represented more succinctly as functions of the fermion density of the transfer matrix approach, noted $n$, which is equal to the chain density $1/d = n = (2-f)/3$:

(2.23) $\rho_0(n) = \dfrac{\cos(n\pi)(\cos(n\pi) + 1) - 2}{\pi^2} - n\sin(n\pi)\,\dfrac{\cos(n\pi)(n - 2) + 2n - 3}{\pi(\cos(n\pi) + 1)} - \dfrac{\sin(n\pi)(\cos(n\pi) - 1)}{\pi^3} + n^2(n - 1)$

(2.24) $\rho_3(n) = \dfrac{\left[(2 + \cos(n\pi))n^2 - 2n + 1\right]\sin(n\pi)}{\pi(\cos(n\pi) + 1)} + \dfrac{\sin(n\pi)(\cos(n\pi) - 1)}{\pi^3} - n^2(n - 1)$

Let us drop for a moment the restriction to positive flux densities, and consider the whole flux interval. Figure 2.18a shows the four $\rho_j(f)$'s for $f \in [-1, 2]$. Notice that the values found for $f = 2$ and $f = -1$ are in agreement with the staggered states (all the densities go to zero, with the exception of $\rho_2 = 1$). The equations for $\rho_1(n)$ and $\rho_2(n)$ are given at the end of appendix F, and they follow the sum rules eqs. (2.2a) and (2.2b).
Inserting eqs. (2.23) and (2.24) into eq. (2.22), we obtain a perturbative energy near the RK point. We can pass to a polar coordinate system centered at the RK point through the transformation $(V_3, V_0) = (1 + \cos\theta, \sin\theta)$; there is no need for a radial coordinate, since it would only be a multiplicative constant in the first-order perturbation given by eq. (2.21). Figure 2.19 shows the perturbative energies as functions of $f \in [-1, 2]$, for some different angles $\theta$. The energy is always equal to zero for the staggered sectors $f = 2$ and $f = -1$ (fig. 2.19a), as one should expect, and thus the ground state is in these sectors whenever the energy is strictly positive for $f \in (-1, 2)$. When the minima are not located in these sectors, we have either a single minimum inside the $f = 0$ sector (fig. 2.19c), or two local minima, one with $f \leq 0$ and another with $f \geq 0$ (fig. 2.19b). We verified numerically that the global minimum always has a positive flux density. This, together with our Monte Carlo tests for negative fluxes, reinforces our belief that only the positive flux sectors play a role in the phase diagram of the $V_0$-$V_3$ model.

We can better describe the phases near the RK point if we minimize the energy $E_{Pert}(f)$ for each angle, and build a function $f(\theta)$, which gives us the flux density of the ground state for each value of $\theta$. This function is represented in fig. 2.18b, and it allows us to identify three different regions of the phase diagram, separated by the angles $\pi/2$, $\theta_1 \simeq 1.84695$ and $\theta_2 \simeq 4.8268$. Let us start the description at $\theta = \pi/2$ and increase the angle progressively. For this value of $\theta$, the energy minimum is found at the staggered flux sector (fig. 2.19a). Over the interval $[\pi/2, \theta_1]$, the flux decreases continuously from its maximum value, down to a value $f(\theta_1) = f_1 \simeq 0.195654$. At $\theta = \theta_1$, the energy has what seems to be a plateau going from $f \simeq -0.25$ to $f \simeq 0.5$ (fig. 2.19b), but an analysis of the extrema finds two global minima, one at $f_1$ and another at $f = 0$. For $\theta = \theta_1 + \epsilon$, the minimum at $f_1$ disappears and we are left with a minimum at $f = 0$. This means that, at $\theta = \theta_1$, the minimum-energy flux sector jumps discontinuously from $f_1$ to the zero flux sector, where the minimum stays over the interval $[\theta_1, \theta_2]$. At $\theta_2$, the energy is positive for all flux sectors, with the exception of the $f = 0$ sector, which has zero energy here, and the $f = 2$ sector. At $\theta = \theta_2$ we have, then, a second discontinuous jump of the flux (not represented in fig. 2.18b), now from the zero flux sector to the $f = 2$ sector. Finally, over the remaining interval, from $\theta_2$ back around to $\pi/2$, the energy is positive, and thus the ground state is found in the $f = 2$ flux sector.

The positions of the three distinct regions and their interfaces are compatible with the phase diagram description done so far, with the fan, zero flux and staggered regions meeting at the RK point. Indeed, we can compare the values of the angles $\theta_1$ and $\theta_2$ to the interfaces found near the RK point through the Monte Carlo simulations. The interface between the $f = 0$ and the $f = 0.2$ flux sectors (where the latter is the closest one to $f_1$ that we could simulate using a $60\times60$ lattice) forms an angle $\theta'_1 \simeq 1.816$, while the interface between the $f = 0$ and the $f = 2$ sectors has an angle $\theta'_2 \simeq 4.834$. Both of these angles are compatible with the perturbative analysis.
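The minimization behind fig. 2.18b is easy to reproduce. The sketch below (an illustration we add, not the thesis code, and relying on the reconstructed forms of eqs. (2.23) and (2.24) above) evaluates $\rho_0$ and $\rho_3$ and locates the minimum of $E_{Pert}$ on a grid of flux densities:

```python
import numpy as np

def rho0(n):
    # eq. (2.23), as reconstructed above
    c, s = np.cos(n * np.pi), np.sin(n * np.pi)
    return ((c * (c + 1) - 2) / np.pi**2
            - n * s * (c * (n - 2) + 2 * n - 3) / (np.pi * (c + 1))
            - s * (c - 1) / np.pi**3
            + n**2 * (n - 1))

def rho3(n):
    # eq. (2.24), as reconstructed above
    c, s = np.cos(n * np.pi), np.sin(n * np.pi)
    return (((2 + c) * n**2 - 2 * n + 1) * s / (np.pi * (c + 1))
            + s * (c - 1) / np.pi**3
            - n**2 * (n - 1))

def E_pert(theta, f):
    # eq. (2.22) with (V3, V0) = (1 + cos(theta), sin(theta))
    n = (2 - f) / 3
    return np.sin(theta) * rho0(n) + np.cos(theta) * rho3(n)

# Ground-state flux f(theta) by brute force on a fine grid; the grid stops
# just short of f = -1, where cos(n*pi) + 1 in the denominators vanishes.
f_grid = np.linspace(-0.999, 2.0, 30001)
for theta in (1.80, 1.84, 1.85, 1.90):
    f_min = f_grid[np.argmin(E_pert(theta, f_grid))]
    print(f"theta = {theta:.2f}: f(theta) ~ {f_min:+.4f}")
# Around theta_1 ~ 1.847 the minimum should jump from f ~ 0.196 to f = 0,
# reproducing the discontinuity of fig. 2.18b.
```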
The discontinuity of the flux at $\theta_1$ indicates that, except exactly at the RK point, the flux densities inside the $(0, f_1)$ interval do not appear in the phase diagram (fig. 2.11), although they do have an energy barely different from the one found for the sectors $f = 0$ and $f = f_1$ (see fig. 2.19b). As we have seen while studying the $V_0$ model in section 2.3.4, we do not see a similar interval of quasi-degenerate flux densities at the interface between the zero flux and the fan regions far away from the RK point (fig. 2.16). This, together with the fact that no flux sectors with $f < 1/2$ appear for $V_3/t \ll 0$, indicates that the sectors with flux densities $f_1 < f < 0.5$ only appear near the RK point, and are rapidly suppressed by the zero flux sector as we move away from this point, instead of being "compressed" into a quasi-degenerate state on the interface between the zero flux and the fan regions, as one could expect from the behavior found for $\theta = \theta_1$.

Cantor deconfinement: Among the three regions described above, the most interesting is the $[\pi/2, \theta_1]$ region, which corresponds to the "start" of the fan region. The presence of a continuous flux transition from $f = 2$ to $f = f_1$ reminds us of the Cantor deconfinement proposed by Fradkin et al. Let us present their idea briefly. Using the height representation of the quantum dimer model, the RK point can be described by a massless Gaussian field theory. Fradkin et al. and Vishwanath et al. discussed how the effective action of the original RK model should be modified in the presence of generic perturbations, leading to flux phases different from the zero flux sector. The ground states found here, with our first-order perturbation, are still ground states of the RK point, and so the flux continuum found in the $[\pi/2, \theta_1]$ region can be compared to the non-vanishing fluxes found in the perturbed height model. There, it was determined that a cubic interaction for the height is the leading term favoring a non-zero flux. We observed numerically, through our Monte Carlo simulations, that the inclusion of a $\hat\rho_0$ term in the RK quantum dimer model Hamiltonian induces a flux perpendicular to some lattice bonds of the hexagonal lattice. This indicates that the sign of this cubic interaction is negative, in the notation of Fradkin et al., and that the system is a priori gapless.

However, since the height $h$ is a compact variable ($h = h + 2\pi R$), some periodic potentials $V(h)$, preserving the height periodicity, are allowed by symmetry in the Lagrangian. In general, these terms are cosines of the height field, $V_p(h) = \cos(p \cdot h/R)$, with integer $p$, and they keep track of the discreteness of the microscopic height. These new potentials are relevant as soon as the flux (which in the height model corresponds to the state's tilt, see the last paragraph of section 2.2.2) is rational, and they lead to gapped crystals, with a commensurate structure of domain walls (although their gap becomes exponentially small in $1/f$ close to the RK point). If an irrational flux is imposed, a gapless incommensurate structure is expected, possibly with Aubry's "breaking of analyticity". The phase diagram close to the RK point is thus expected to be a succession of commensurate and incommensurate phases, similar to a "devil's staircase", with the latter forming a generalized Cantor set. Because of this behavior, this phenomenon was called Cantor deconfinement by Fradkin et al.
In our numerical simulations, we can only explore a small set of the rational flux sectors, due to the finite-size systems that we must use, and so, strictly speaking, the Monte Carlo simulations cannot reproduce a complete devil's staircase. Nevertheless, our results for the fan region are fully compatible with the scenario of refs. [35, 44].

Figure 2.19: Evolution of the energy $E_{Pert}(f)$ as a function of the flux density, for different angles $\theta$: (a) minima at the staggered sectors $f = 2$ and $f = -1$, (b) local minima at intermediate values of the flux density, (c) minimum inside the zero flux sector.

Chapter 3

Planar partitions and quantum dimer models

3.1 Partition problems: definition

The generalized integer partitions (or simply partitions) were first studied by MacMahon at the start of the 20th century, in the context of combinatorial analysis. A partition is defined as an ensemble of $K$ integers $\{h_i\}$, bounded between 0 and a maximum height $p$, and following a series of order relations. These relations can be represented on an oriented graph, such as the one in fig. 3.1, and, together with the constraint $0 \leq h_i \leq p$, they define a so-called partition problem. All the partitions that follow such relations are called the solutions of the partition problem, and they generate a corresponding configuration space.

Figure 3.1: Graph defining a partition problem over the integers $h_1, \ldots, h_7$. The oriented edges represent the order relations between the integers $h_i$ and the maximum height $p$.

A partition problem is called a hyper-solid partition problem if its underlying graph forms a hyper-cubical lattice with dimensions $n_1 \times n_2 \times \ldots \times n_d$, and if its order relations are descending along each one of the $d$ directions. In this context, the integers form a $d$-dimensional array with $K = \prod_{l=1}^{d} n_l$ elements, and each solution of a partition problem can be identified with stacks inside a $(d+1)$-dimensional space, with the integers $h_i$ representing the stacks' heights. From here onward, we will restrict ourselves to this type of partition problem and, to lighten the notation a bit, we will identify each problem by its dimensions and its maximum height, under the notation $[n_1, n_2, \ldots, n_d|p]$. We will also restrict ourselves to problems following weak order relations,

(3.1) $0 \leq h_i \leq p, \quad i \in \{1, 2, \ldots, K\}; \qquad h_i \geq h_j \text{ if } h_i \to h_j, \quad i, j \in \{1, 2, \ldots, K\},\ i \neq j,$

where $h_i \to h_j$ indicates that these integers are linked by an oriented edge of the problem's graph. We are interested here in the partition problems with $d = 1$ ($d = 2$), which are called linear (planar) partition problems, respectively. Their solutions can be simply presented as matrices, as in fig. 3.2, and map into stacks inside a 2D (3D) space (see fig. 3.3 for the planar partitions). This graphical representation will be specially useful to build a map between the planar partitions and the dimer coverings on a hexagonal lattice. A more detailed study of the general integer partition problems and their applications can be found in refs. [21, 54, 20].

Figure 3.2: Graphs associated to a $[4, 3|p]$ planar partition problem (above) and a $[4|p]$ linear partition problem (below), and their representations as matrices.
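Since the configuration space is just the set of integer arrays satisfying these inequalities, small problems can be enumerated directly. The sketch below (an illustration we add, with hypothetical helper names) lists the solutions of the $[2,2|2]$ planar problem, whose order relations $p \geq h_1 \geq h_2 \geq h_4 \geq 0$ and $p \geq h_1 \geq h_3 \geq h_4 \geq 0$ are written out in section 3.4.1; the resulting count of 20 agrees with MacMahon's box-counting formula for plane partitions in a $2\times2\times2$ box:

```python
from itertools import product

def planar_partitions_2x2(p):
    """Brute-force solutions of the [2,2|p] problem, with heights (h1, h2, h3, h4)
    obeying p >= h1 >= h2 >= h4 >= 0 and p >= h1 >= h3 >= h4 >= 0
    (the order relations written out explicitly in section 3.4.1)."""
    return [h for h in product(range(p + 1), repeat=4)
            if h[0] >= h[1] >= h[3] and h[0] >= h[2] >= h[3]]

print(len(planar_partitions_2x2(2)))   # -> 20 solutions of the [2,2|2] problem
```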
Figure 3.3: Stack representations of some planar partitions for $n_1 = n_2 = p = 2$ (partition problems of the $[2, 2|2]$ type).

3.2 Applications of the partition problems

Our interest in studying the partition problems comes from the equivalence between their configuration spaces and those of certain physical models. This approach was used to study classical systems such as the random tiling models and the classical dimer models and, as far as we are aware, was rarely applied to quantum models. In this thesis, we are interested in the equivalence between the planar partitions and the classical rhombus tilings and, mainly, the classical dimer models on a honeycomb lattice, and to a certain extent in the one between the linear partitions and the classical spin chains. As we have argued in the first chapter (section 1.1.1), the configuration space of these classical systems can be used to build an orthonormal basis of the corresponding quantum version's Hilbert space. With this in mind, we propose a quantum partition model (QPM), which can be used as a common framework for these different physical systems. Notice that a slightly different QPM was also studied previously. We will start this section with a description of these classical systems, followed by the definition of the quantum partition model and the correspondences with the quantum dimer model and the Heisenberg XXZ model.

Figure 3.4: Equivalence between the $[2, 2|2]$ planar partitions, the random rhombus tilings inside a $2\times2\times2$ hexagon, and dimer coverings over a hexagonal lattice with 7 cells. In all cases, the configuration corresponding to the empty partition is taken as a reference state.

3.2.1 Classical systems

A classical system can be mapped to a partition problem if it satisfies three conditions. First, it must be composed of a fixed number of primitive elements (dimers, spins, tiles, ...) and, second, it must have a local operation (informally called a "flip") that exchanges the positions of two or more of its primitive elements. For such systems, one can choose a reference state and use flip operations to build all the possible configurations accessible through local operations, which amounts to the system's whole configuration space in the ergodic case. The number and distribution of flip operations can be encoded into a table of integers following order relations, in other words a partition, if the physical constraints of the classical system can be translated into order relations (third condition). If these conditions are respected, we have a one-to-one correspondence between the partitions and the classical system's configurations, with a flip operation being equivalent to adding or removing one from one of the partition's coordinates $h_i$. A few examples of systems that can be described using partition problems, when subjected to specific boundary conditions, are the rhombus tiling models, the classical dimer coverings, and the classical spin-1/2 chains.

Figure 3.5: Arctic circle for a rhombus tiling (figure reproduced from an external reference).

Rhombus random tiling model: This classical system has the most direct mapping to the partition problems, specifically to the planar partition problems $[n_1, n_2|p]$.
Graphically speaking, we can build it just by taking the stack representation of a partition, seen from the direction $(1, 1, 1)$, and projecting it onto a plane (fig. 3.4). This projection transforms the faces of the cubes into rhombus tiles with three different orientations, constrained inside a hexagon with sides $n_1 \times n_2 \times p$. The empty partition maps into the reference state (left-hand side of fig. 3.4), with the flip operation being the rotation of a hexagon formed by three rhombus tiles. The equivalence to increasing/decreasing one of the partition's coordinates $h_i$ is straightforward from the figure. The order relations between the $h_i$'s impose that there are no defects in the rhombus tiling (there are no triangular tiles, which would appear if a column could overhang another one behind it), and the dimensions $n_1$ and $n_2$, together with the maximum height, impose the fixed boundary condition. This type of mapping can be applied to partition problems more complex than the planar partitions, generating tiling models with other kinds of tiles, but this is not the focus of this thesis. A more detailed study of these classical models using the framework of the partition problems was the object of Nicolas Destainville's thesis.

The rhombus tiling models inside a hexagon present a very interesting phenomenon. For a sufficiently large system, most of the rhombus tilings present two distinct domains: a central region inside which we have a random tiling (all three possible tile types are present, distributed randomly), surrounded by the corners of the hexagon, where the tiles are "frozen" (inside each corner, all the tiles have the same orientation). An example can be seen in fig. 3.5. The curve delimiting these two domains is called the arctic circle, or the frozen boundary, and this phenomenon is also present for other types of random tilings and constrained systems [25, 56, 57, 58]. In our case, this curve has an elliptic shape (or a circle, if $n_1 = n_2 = p$) tangent to the six edges of the bounding hexagon. It can be shown that, for an infinitely large system, the probability of randomly choosing one rhombus tiling presenting the arctic circle tends to 1.

Dimer coverings: It is possible to build a straightforward equivalence between the planar partition problems and the dimer coverings on a honeycomb lattice, like the ones studied in the previous chapters, using the rhombus tilings as an intermediate step. Each dimer covering can be mapped to a rhombus tiling by associating each dimer to a single rhombus, in such a way that the dimer lies on the rhombus's longest diagonal (see fig. 3.4). Since the rhombus tilings restricted to a hexagonal domain are equivalent to the planar partitions, we can use this mapping to link this class of partitions to the dimer coverings on a correspondingly restricted honeycomb lattice. In this case, the flip operation corresponds to a rotation of a plaquette with three dimers (thus identical to the flip operations used for the quantum dimer model, see chapter 1), and the partition's order relations impose that every vertex of the honeycomb lattice is linked to one and only one dimer. A dimer covering equivalent to an $[n_1, n_2|p]$ partition is restricted to a hexagonal cut of the honeycomb lattice, with dimensions $n_1 \times n_2 \times p$ and a number of plaquettes equal to

(3.2) $L_{Part} = n_1(n_2 - 1) + n_2(p - 1) + p(n_1 - 1) + 1.$
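For the symmetric problems $n_1 = n_2 = p$ used in the Monte Carlo study of section 3.3, this count reduces (a one-line check we add here for later reference) to

$$L_{Part}\big|_{n_1 = n_2 = p} = 3\,p(p - 1) + 1,$$

so that, for instance, the $[60, 60|60]$ problem corresponds to a honeycomb patch with $3 \cdot 60 \cdot 59 + 1 = 10621$ plaquettes, the value quoted in section 3.3.1.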
Also, since we have an exact mapping between the dimer coverings and the rhombus tilings, the former also present the arctic circle phenomenon, which in terms of dimer densities translates into a circular region with a random dimer covering surrounded by six staggered corners. This can be easily visualized by taking the representation of the arctic circle for the rhombus tilings, fig. 3.5, and inserting the dimers inside the rhombi.

Spin chains: The two examples above involved mappings of a planar partition problem, but in the case of the spin chains, we have a mapping with the linear partition problem $[n_1|p]$. Consider the configuration space formed by the (classical) open spin chains with $n_1 + p$ spins, divided into $p$ spins $\uparrow$ and $n_1$ spins $\downarrow$, and thus with a fixed magnetization $(p - n_1)/2$. All the possible chain configurations can be built using a reference state and the spin pair flip operation $\downarrow\uparrow \Leftrightarrow \uparrow\downarrow$. Let us take as the reference state the spin chain with all the spins $\downarrow$ to the left and all the spins $\uparrow$ to the right,

(3.3) $\underbrace{\downarrow \ldots \downarrow}_{\times n_1} \underbrace{\uparrow \ldots \uparrow}_{\times p}.$

We can index each spin chain by counting, for each spin $\downarrow$, how many steps to the right it was moved from its original position in the reference state. Let us note this number for the $i$th spin $\downarrow$ as $x_i$. Since we cannot flip two neighboring spins with the same sign, the maximum number of steps available to the $i$th spin $\downarrow$ is bounded by the number of steps taken by the $(i+1)$th spin $\downarrow$, which itself is bounded between 0 and $p$. We have then the chain of order relations

(3.4) $p \geq x_{n_1} \geq x_{n_1 - 1} \geq \ldots \geq x_2 \geq x_1 \geq 0$

for the positions of the spins, which is identical in form to the linear partition order relations, up to an index exchange. There is then a bijection between the configuration space of the classical spin chains, under these constraints, and the linear partitions of the problem $[n_1|p]$. Again, we have an equivalence between the flip operations of the classical system and those of the partition problem. In this case, the order relations impose open boundary conditions on the chains, and the total magnetization of the spin chain, invariant under the flip operations, can be written as a function of the partition problem's dimensions as $(p - n_1)/2$.
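A minimal sketch of this indexing (an illustration we add; the function name and the 'd'/'u' encoding are ours) maps each chain to its partition coordinates and makes the bijection concrete for $n_1 = p = 2$:

```python
def chain_to_partition(chain):
    """Map an open spin chain (string with 'd' = down, 'u' = up) to its
    linear-partition coordinates: x_i = number of steps the i-th down
    spin has moved right from the reference state d...d u...u of eq. (3.3)."""
    positions = [k for k, s in enumerate(chain) if s == 'd']
    return [pos - i for i, pos in enumerate(positions)]

# All C(4,2) = 6 chains with two down spins map onto the 6 solutions of [2|2],
# i.e. the pairs 0 <= x_1 <= x_2 <= 2 of eq. (3.4):
for chain in ("dduu", "dudu", "duud", "uddu", "udud", "uudd"):
    print(chain, "->", chain_to_partition(chain))
# A single pair flip du <-> ud changes exactly one coordinate x_i by one.
```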
3.2.2 Boundary conditions of the 2D systems

Figure 3.6: Different visualizations of a planar partition: (a) stacks and rhombus tilings, (b) dimer coverings and (c) classical spins (full and empty circles correspond, respectively, to spins "down" and "up"; the red circles are part of the boundary condition and cannot be flipped).

The mappings between the partition problems and the classical systems above impose significant conditions on the boundaries of the systems considered. We have seen, for example, that the rhombus tiling models equivalent to partition problems must have a fixed boundary condition. This imposition has a considerable effect on the possible configurations and on the configurational entropy, with phenomena such as the arctic circle. Since we will use these classical states to build a basis of the Hilbert space, these effects will have consequences for the corresponding quantum models, and thus it is important to fully understand them. Here, we will focus on the planar partitions and the classical dimer models.

Figure 3.6a shows a partition of the $[3, 3|3]$ problem in its stack representation, and fig. 3.6b shows the corresponding dimer covering. We can see that the fixed boundary condition of the random tilings, linked to the dimensions and the maximum height of the partition problem, translates into a honeycomb lattice formed by closed plaquettes, arranged as a hexagon with sides $n_1 \times n_2 \times p$. This figure also shows the superposition of the random tiling with the other representation of the dimer models that we used in chapters 1 and 2, namely the (classical) Ising model (fig. 3.6c). In this case, we have a hexagonal patch of the triangular lattice, with fixed boundary conditions, where we find alternating, fixed spins (marked with red circles in the figure). The edges of the rhombus tilings connect the antiferromagnetic spin pairs. In the dimer notation, these alternating spins guarantee that the external vertices of the honeycomb lattice are linked to one dimer. For convenience, we will refer to these boundary conditions as partition boundary conditions.

Let us return now to the classical dimer coverings. The planar partition problems used to represent them here are ergodic, and as a consequence we have only one topological sector. Also, it should be expected that the states minimizing and maximizing the number of flippable plaquettes are different from what we have seen in chapter 1. Indeed, a simple hexagonal cut of the original star and staggered crystals would leave some vertices of the honeycomb lattice with no dimers. The dimer coverings that minimize the number of flippable plaquettes (fig. 3.7, left) present three staggered domains, meeting at the central plaquette, which is flippable and the only one with a number of dimers different from two. This is important, because it means that the new "staggered" states are not disconnected from the rest of the configuration space. These coverings are equivalent to either the empty or the full partition, depending on the arrangement of the staggered domains.

Figure 3.7: Dimer configurations under the partition boundary conditions that minimize (left) and maximize (right) the number of flippable plaquettes, for $n_1 = n_2 = p = 4$. The blue dimers are part of a flippable plaquette.

The dimer covering that maximizes the number of flippable plaquettes (fig. 3.7, right) has an interesting structure, presenting a hexagonal star domain surrounded by six staggered domains, with the latter working as a transition, or buffer, between the partition boundary conditions and the former. The corresponding partitions are, as one should expect, the ones with a maximum number of neighbors inside the configuration space. For the symmetric planar problems, with $n_1 = n_2 = p$, there are two such partitions if $p$ is odd (symmetric under a rotation around the axis $(1, 1, 1)$) and one if $p$ is even; the inner hexagon touches the boundaries of the lattice near (or at, if $p$ is even) the center of its edges, maximizing its surface, and thus the number of flippable plaquettes. Finally, the interfaces between the star and the staggered domains are akin to the ones between the staggered domains and the S chains seen in the $V_0$-$V_3$ model.
3.2.3 Quantum partition model, QPM

We will now build the framework for a quantum partition model (QPM), which can be used to describe the quantum versions of the classical models discussed in section 3.2.1. The most straightforward way to add quantum dynamics to a partition problem is by fiat, using the problem's configuration space as an orthonormal basis of the quantum model's Hilbert space. Let us identify each element of this classical basis as a ket $|A\rangle$. For any two elements $|A\rangle$ and $|B\rangle$ of this basis, the orthonormality relation is written as

(3.5) $\langle A | B \rangle = \delta_{A,B},$

and any quantum partition $|\psi\rangle_{QP}$ can be written as a superposition of the classical states,

(3.6) $|\psi\rangle_{QP} = \sum_A c_A |A\rangle,$

where the sum is over the whole classical configuration space $F$. We propose to associate to this Hilbert space the Hamiltonian

(3.7) $\hat H_{QPM} = -t \sum_A \sum_{\langle B \rangle_A} |A\rangle\langle B| + V \sum_A n_A |A\rangle\langle A|,$

where the sums $\sum_A$ and $\sum_{\langle B \rangle_A}$ are, respectively, over all the states of the orthonormal basis and over all the $n_A$ states neighboring $|A\rangle$, i.e. the states that differ from $|A\rangle$ by a single flip operation. The first term is a kinetic one, and it represents the transfer energy between neighboring partitions of the orthonormal basis. The second term is a potential energy, proportional to the number of other basis states accessible from a given basis state.

RK model: The structure of eq. (3.7) is very similar to the one used for the RK model, eq. (1.1), studied in chapter 1. Indeed, it is straightforward to map these two Hamiltonians. We can rewrite eq. (3.7) with local flip operators $\hat f_i$ and $\hat f_i^\dagger$, which respectively add or remove one from the coordinate $h_i$,

(3.8) $\hat H_{QPM} = -t \sum_i \left( \hat f_i + \hat f_i^\dagger \right) + V \sum_i \left( \hat f_i \hat f_i^\dagger + \hat f_i^\dagger \hat f_i \right).$

The mapping between $\hat H_{QDM}$ and $\hat H_{QPM}$ can be done by exchanging the operators $\hat f_i$ and $\hat f_i^\dagger$ for their equivalents in the dimer notation, the operators that rotate the dimers of plaquette $i$ between its two flippable configurations. The sums of eq. (3.8) are over all the coordinates $h_i$ of the classical partitions, whose number is different from the total number of plaquettes used in eq. (1.1). Still, this does not pose a problem, since the only non-zero terms of both Hamiltonians are associated to the corresponding flippable elements, which have a one-to-one equivalence between the two models. This mapping of the Hamiltonians, together with the equivalence between the two models' orthonormal bases, allows us to use the quantum partition model to study the quantum dimer model, under the constraints imposed by the partition boundary conditions.

As with eq. (1.1), $\hat H_{QPM}$ also presents an RK point. Rewriting eq. (3.7) as a sum of projectors with $V = t$, we find

(3.9) $\hat H_{QPM,RK} = V \sum_{\langle A,B \rangle} \big[\, |A\rangle - |B\rangle \,\big] \cdot \big[\, \langle A| - \langle B| \,\big],$

where $\langle A, B \rangle$ denotes a sum over all the neighboring pairs of partitions. As with the QDM, each term of this sum is positive semi-definite and annihilates any state in which neighboring partitions carry equal amplitudes, and the ground state is the superposition of all the basis states with equal weight,

(3.10) $|\psi_{RK}\rangle = \dfrac{1}{\sqrt{P_F}} \sum_C |C\rangle,$

where $P_F$ is the total number of classical states. Recall that, for a large enough system, most of the classical dimer coverings composing this sum present an arctic circle. This means that near or at the RK point we should be able to identify such a structure in, for example, the local dimer density (see section 3.3.1).
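To make the construction concrete, here is a small sketch (an illustration we add, with hypothetical function names) that builds $\hat H_{QPM}$ of eq. (3.7) for a linear problem $[n|p]$ in the partition basis and checks that the ground-state energy vanishes at the RK point $V = t$, as eqs. (3.9) and (3.10) imply:

```python
import numpy as np
from itertools import product

def linear_qpm_hamiltonian(n, p, t, V):
    """H_QPM of eq. (3.7) for a linear problem [n|p]: the basis is the set of
    weakly decreasing height vectors, and two basis states are neighbors when
    they differ by +-1 in a single coordinate (one flip operation)."""
    basis = [h for h in product(range(p + 1), repeat=n)
             if all(h[i] >= h[i + 1] for i in range(n - 1))]
    index = {h: k for k, h in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for h, k in index.items():
        for i in range(n):
            for dh in (+1, -1):
                hp = h[:i] + (h[i] + dh,) + h[i + 1:]
                if hp in index:             # hp is a neighboring partition
                    H[k, index[hp]] -= t    # kinetic term -t |A><B|
                    H[k, k] += V            # potential term V n_A |A><A|
    return H

H = linear_qpm_hamiltonian(n=3, p=3, t=1.0, V=1.0)
print(np.linalg.eigvalsh(H)[0])  # ~ 0 at the RK point V = t, cf. eqs. (3.9)-(3.10)
```

The same construction applies to the planar problems; only the order-relation filter defining the basis changes.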
Heisenberg XXZ spin chains: It is also simple to build the equivalence between the QPM associated to the linear partitions $[n|p]$ and the XXZ spin-1/2 chain model. This model is a specialization of the more general Heisenberg model, which was solved exactly by Hans Bethe in 1931 for the 1D periodic spin-1/2 chains. Let us consider the Hamiltonian of this model for an open spin-1/2 chain of length $N$,

(3.11) $H_{XXZ} = -\displaystyle\sum_{m=1}^{N-1} \left[ t \left( \sigma_m^+ \sigma_{m+1}^- + \sigma_m^- \sigma_{m+1}^+ \right) + \frac{1}{2} V \sigma_m^z \sigma_{m+1}^z \right],$

where $\sigma^z$ is the Pauli matrix associated to the $z$ direction, and $\sigma_m^\pm = (\sigma_m^x \pm i\sigma_m^y)/2$ are the spin flip operators. The effect of the first term is to flip two neighboring and opposite spins, $\uparrow\downarrow \Leftrightarrow \downarrow\uparrow$, and the second term counts the difference between the numbers of ferromagnetic and antiferromagnetic spin pairs (noted respectively $\hat N_F$ and $\hat N_{AF}$). It should be noted that the total magnetization of the spin chain, which we linked in the previous section to the linear partition parameters $n$ and $p$, is invariant under this Hamiltonian.

The kinetic term of eq. (3.11) is already in a form comparable to eq. (3.7), but we still have to work a bit on the potential term. Inserting the relation

$\left( \hat N_F + \hat N_{AF} \right) |\psi\rangle = (N - 1) |\psi\rangle,$

valid for any spin chain $|\psi\rangle$, into eq. (3.11), we have

$H_{XXZ} = -t \displaystyle\sum_{m=1}^{N-1} \left( \sigma_m^+ \sigma_{m+1}^- + \sigma_m^- \sigma_{m+1}^+ \right) - \frac{1}{2} V \left( \hat N_F - \hat N_{AF} \right)$

$\phantom{H_{XXZ}} = -t \displaystyle\sum_{m=1}^{N-1} \left( \sigma_m^+ \sigma_{m+1}^- + \mathrm{h.c.} \right) + \frac{1}{2} V \hat N_{AF} - \frac{1}{2} V \left( N - 1 - \hat N_{AF} \right)$

(3.12) $H_{XXZ} = -t \displaystyle\sum_{m=1}^{N-1} \left( \sigma_m^+ \sigma_{m+1}^- + \mathrm{h.c.} \right) + V \hat N_{AF} - \frac{1}{2} V (N - 1).$

Since $\hat N_{AF}$ counts exactly the number of flippable spin pairs, the XXZ Hamiltonian above is equivalent to the one for the QPM, up to a constant term $\frac{1}{2} V (N - 1)$, identical for all spin chains.

3.3 Monte Carlo simulations

Figure 3.8: Comparison between the QMC and the ED: energy per plaquette of a $[3, 3|3]$ planar QPM ($L = 19$), for both numerical methods.

A consequence of the equivalence between the planar quantum partition model and the RK model under partition boundary conditions is that we can use the Monte Carlo algorithm presented in the first chapter (section 1.3.3) to simulate the former. The algorithm itself is independent of the boundary conditions of the underlying system, so to simulate the planar QPM we only have to enforce the new boundary conditions on the state chosen to initialize the simulations. We recall that our MC algorithm simulates the RK model using a 3D classical Ising model, with $N$ stacks of 2D triangular lattices, each one dual to the original honeycomb lattice and thus with a number of sites equal to the number of plaquettes of the latter (in our case, equal to $L_{Part}$, eq. (3.2)), and that the simulation precision depends on the inverse temperature, $\beta$, and the inverse temperature discretization, equal to $\Delta\beta = \beta/N$. The partition boundary conditions can be imposed on this 3D CIM by forcing fixed boundaries with alternating up and down spins on each layer, as in fig. 3.6c. To verify this, we compared the ground state energy $\langle \hat H \rangle_{QIM}$, eq. (1.13), of a $[3, 3|3]$ planar QPM, obtained through the MC algorithm, to the one obtained through exact diagonalization. The results, presented in fig. 3.8, indicate that the algorithm works properly with the new boundary conditions.

3.3.1 Effects of the boundary conditions on the RK model
Considering how different the classical dimer coverings are under the partition boundary conditions (fig. 3.7) when compared to the staggered and star crystals found for the periodic boundary conditions, one should expect the RK model under these conditions to behave differently from what we have seen in the first chapter. To study it, we ran simulations for various system sizes with symmetric dimensions $n_1 = n_2 = p$, each one containing $3p(p-1) + 1$ plaquettes, using the same temperature parameters as for most of the simulations of the first chapter, $\beta = 19.2$ and $\Delta\beta = 0.02$. In this subsection, we will discuss these differences, using the RMS magnetization $\langle \hat m^2 \rangle^{1/2}$ (eq. (1.10)), the normalized numbers of $j$-plaquettes $\langle \hat\rho_j \rangle$ (eq. (1.11)), and the local dimer density $\langle \hat n_i \rangle$, since the effects of the first-order phase transition at $(V/t)_C = -0.228 \pm 0.002$ were quite visible in these order parameters in the periodic case. From here onward, we will refer to the RK model under the partition boundary conditions as a planar QPM, to lighten the notation and to differentiate it from the original RK model when discussing the differences between the two.

Figure 3.9: (a) RMS magnetization $\langle \hat m^2 \rangle^{1/2}$ and (b) normalized numbers of $j$-plaquettes $\langle \hat\rho_j \rangle$ as functions of $V/t$, for a $[60, 60|60]$ planar quantum partition model.

Magnetization and number of $j$-plaquettes: Figure 3.9 shows the magnetization and the normalized numbers of $j$-plaquettes for a $[60, 60|60]$ planar QPM, which corresponds to a QDM with $L_{Part} = 10621$ plaquettes. It is clear that these curves do not behave in the same way as the ones obtained for the periodic RK model (figs. 1.3 and 1.6). Let us start with the magnetization. For large and negative values of $V/t$, the ground state is similar to the classical dimer covering with a maximum number of flippable plaquettes, portrayed in fig. 3.7, and $\langle \hat m^2 \rangle^{1/2}$ attains its maximum value, just as in the periodic case. Here, this maximum is equal to $\langle \hat m^2 \rangle^{1/2} = 1/4 = (1/3) \cdot (3/4)$, because only the hexagonal star domain, which covers $\sim 3/4$ of the plaquettes of the lattice, contributes to it. The magnetization also becomes constant and almost zero from $V/t \sim (V/t)_C$ up to the RK point, again in accordance with what we have seen in the periodic case. The biggest difference is the presence of a series of small magnetization drops at $V/t \sim -2.75$, $\sim -2$ and $-1.8$, far before the critical value $(V/t)_C = -0.228 \pm 0.002$ of the star/plaquette phase transition seen for the original RK model. These drops are too smooth to characterize a single first-order phase transition, but they can represent either a crossover between two states or a cascade of local transitions. The fact that the simulation's auto-correlation increases considerably during these drops points to the latter. Finally, we still have something akin to a star/plaquette transition near the original $(V/t)_C$, but the amplitude of the magnetization is greatly reduced.

The behavior of the numbers of $j$-plaquettes is similar. We do not have discontinuities of the densities; instead, we have smooth shifts of $\langle \hat\rho_0 \rangle$, $\langle \hat\rho_1 \rangle$ and $\langle \hat\rho_2 \rangle$ in the same regions as the continuous drops of the magnetization. We cannot identify any discontinuities near the critical value of $V/t$ either, but we can see a change in the densities' behaviors. More importantly, the number of 3-plaquettes, which is still equal to the derivative of the energy with respect to $V$ (section 1.5.2), does not present any visible shifts or discontinuities, which reinforces the absence of a first-order phase transition. It should be noted that these observables follow a modified sum rule, different from the one seen in eq. (1.12), due to the different boundary conditions.
Dimer density: The local dimer densities present a rather curious behavior. Figure 3.10 shows the two-dimensional plots of $\langle \hat n_i \rangle$ for a $[60, 60|60]$ planar QPM, for various values of $V/t$. For $V/t = -5 \ldots -2.74$, the dimer density has the same structure as the classical dimer covering that maximizes the number of flippable plaquettes, with a hexagonal star domain (which we will call the bulk star domain from here onward) and six staggered corners. Notice that, for $V/t = -2.6$, the lower and lower-left borders present bands that are structurally similar to, respectively, the isolated H and S chains seen in the previous chapters. The fact that not all borders present these bands for this value of $V/t$ is in accordance with the high auto-correlations found for the $\langle \hat\rho_i \rangle$'s and $\langle \hat m^2 \rangle^{1/2}$, indicating that the MC simulation is not converged there. For $V/t = -2.32$, just after the first magnetization drop, all the sides present S chain-like bands, and the simulation is converged. As $V/t$ increases further, we have an interesting pattern: new H chain-like bands start to appear, separated by S chain-like bands, while the old ones are pushed towards the staggered corners. These new bands form "arcs", with the corners of the original bulk region as fixed points, and grow towards the center of the bulk region as $V/t$ increases. We will call the region covered by these band patterns the band domain, and the S and H chain-like structures S and H bands.

Figure 3.10: Local dimer density $\langle \hat n_i \rangle$ of a $[60, 60|60]$ planar QPM, for various values of $V/t$.

The band domain keeps increasing until we approach the critical $(V/t)_C$ of the periodic RK model, where the bulk star region (or what is left of it) becomes a bulk plaquette region, accounting for the magnetization drop seen in fig. 3.9a. Finally, as we approach the RK point, the dimer densities smooth out, in a similar fashion to the periodic RK model.

Figure 3.11: Spin inversion between two star domains separated by a plaquette band. For this illustration, it is enough to consider the classical approximation of the star domains.

Figure 3.12: Arctic circle: (left) average local dimer density $\langle \hat n_i \rangle$ over $2.5 \cdot 10^8$ random partitions of the $[30, 30|30]$ planar partition problem, and (right) $\langle \hat n_i \rangle$ for the corresponding QDM, near the RK point ($V/t = 0.95$). The dashed line indicates the arctic circle. The color scale of this figure covers a smaller interval than the previous ones, to increase the contrast between the frozen corners and the interior of the circle.

We can use these densities to interpret the smooth drops and shifts present in fig. 3.9. We have seen in the first chapter that a plaquette phase has almost zero magnetization (section 1.5.3), and so the presence of these H bands reduces the total magnetization. Not only that, but one can show that two star regions separated by a plaquette region have opposing spin signs (see fig. 3.11). This means that the S bands that appear have alternating spin signs, reducing the magnetization even more. Also, the H bands have a different dimer density composition than the star domain, explaining why we have the smooth shifts in the numbers of $j$-plaquettes.
We can also use the local densities to visualize the effects of an arctic circle phenomenon near the RK point, as described in section 3.2.3. We did not manage to make the mean local dimer density converge in the MC simulations exactly at the RK point, but we can see traces of the arctic circle for $V/t = 0.95$, where we see a rounded hexagon. Figure 3.12 shows the mean dimer density for this value of $V/t$ (right) and for $2.5 \cdot 10^8$ random dimer covering samplings, with acceptance equal to 1, for a $[30, 30|30]$ planar QPM (left), and we can clearly see the formation of an approximate arctic circle.

Before we pass to a detailed description of the band regions, let us make a final remark about the arctic circle. This phenomenon is normally observed and described in the context of classical models, but its presence near the RK point allows us to make an interesting interpretation of the dimer densities seen in fig. 3.10. Namely, we can see them as a transition between quantum arctic regions found at the corners of the dimer densities. The ground state for $V/t \ll 0$ presents an arctic hexagon with six frozen triangular corners, due to the effects of the potential energy. When the band regions start to appear, these corners are progressively transformed and reduced, until we arrive at the RK point, where we have a quantum arctic circle, since the ground state given by eq. (3.10) is a quantum state.

3.3.2 Description of the band regions

We will now propose an interpretation of the local dimer density behavior, using the S and H chains introduced in the previous chapter (section 2.2.2). Essentially, the appearance of the band domains can be seen as an exchange process happening at the borders of the bulk star domain, where the outermost (connected) S chains of the latter are progressively exchanged for isolated S and H chains. We can see this exchange process in fig. 3.13, a zoom-in of fig. 3.10 near the original boundary between the bulk domain and the lower staggered corner. At $V/t = -2.6$, we can identify an H chain appearing at the bulk domain border, which becomes an isolated S chain at $V/t = -2.5$, where the MC simulation is properly converged. More chains of the bulk star crystal are exchanged for isolated chains as $V/t$ increases (for example, at $V/t = -1.86$), with the exchange now happening at the interface between the bulk region and the new band region. Notice that, to accommodate the new isolated chains, the previous ones move towards the staggered region, reducing its size. This can be seen clearly by observing how the original isolated S chain gets further away from the original interface as $V/t$ increases, and how it flips its orientation in the process.

Figure 3.13: Local dimer density near the interface between the star and staggered domains, and formation of S and H bands. The position of the initial interface is marked with a black line.

This whole process can be visualized in terms of the height model notation of section 2.2.2. Recall that a staggered state is a tilted $(1, 0, 0)$ plane in this notation, and the star state is a "flat" $(1, 1, 1)$ plane (fig. 2.10). The interface between the staggered and the bulk domains seen for the QPMs is, then, a region where the slope changes discontinuously. We can thus say that the exchange process smooths this discontinuity, since adding an S or an H chain to the staggered state reduces its slope and, conversely, removing an S chain from the star state increases it.
The curved form of the H and S bands can be explained by the corners of the hexagonal original bulk domain, where the domain interfaces meet. In terms of the QIM notation, these corners are the only points where the bulk domain enters in contact with the (fixed) spins of the boundary conditions. This means that the chain exchange cannot be done on the two 3-plaquettes of this corner (see fig. 3.14 for V/t = −2.74, the plaquettes are marked with a black rectangle), because there are no 2−plaquettes behind them to be “consumed” by the exchange process. This blocks this process over the line going from the corner to the center of the bulk domain: they stay in a star-like configuration (although not forming S chains) until we have the star / plaquette phase transition of the bulk, fig. 3.14. Each band domain, then, is isolated and restricted to grow inside a triangular sector of the bulk domain. Due to this, each consecutive chain exchange removes a smaller chain from the latter, resulting in bands that are thicker at the center, and thus curved bands and a curved interface with the staggered corners. 3.3. MONTE CARLO SIMULATIONS 87 0 0.5 1 1.5 2 2.5 3 Figure 3.14: Local dimer density near the corner of the star bulk domain. Energy analysis: We can illustrate that this process creates ground states for the planar QPM with a lower energy than what we would have if this system had the same behavior of the periodic RK model by comparing their respective energies per plaque-tte, ⟨ˆ HQPM⟩/L and ⟨ˆ HQDM⟩/L. Of course, this comparison cannot be done directly using the curves of section 1.5 because the former has a large number of staggered pla-quettes that do not contribute to the energy, but we can rescale the curve of the periodic RK model so that it coincides with the planar QPM for V/t ≪0 and at the RK point, giving to us what is effectively the energy of the planar QPM if it had the same phase diagram as in the periodic case. Figure 3.15a shows the energy ⟨ˆ HQPM⟩/L, obtained for a [60, 60|60] problem, and the re-scaled energy ⟨ˆ HQDM⟩/L, obtained from a 60×60 peri-odic lattice, while fig. 3.15b shows their difference ∆⟨ˆ H⟩/L = ⟨ˆ HQDM⟩/L−⟨ˆ HQPM⟩/L. Notice that, while ⟨ˆ HQPM⟩/L is smaller over the whole interval, we have two shifts of the difference near V/t ∼−2.5 and V/t ∼−2, where we have the appearance of the first and the second isolated S chains. This indicates that the states with the band regions indeed have a lower energy due to the presence of the isolated S and H chains. We conjecture that these chains allow the kinetic energy to increase without reducing too much the potential energy, differently from a dense S1.5 state, where increasing the quantum fluctuations due to the kinetic term reduces the number of 3−plaquettes, since 88 CHAPTER 3. PLANAR PARTITIONS AND QDM’S -3.5 -3 -2.5 -2 -1.5 -1 -0.5 0 -5 -4 -3 -2 -1 0 1 Renormalized V/t [60,60|60] partition 60x60 periodic (a) 0 0.005 0.01 0.015 0.02 0.025 -5 -4 -3 -2 -1 0 1 ✁ V/t (b) Figure 3.15: (a) Energy comparison and (b) energy difference between the renormalized energy of the periodic RK model and the planar QPM. flipping a 3−plaquette in such a state will transform three neighboring 3−plaquette into 2−plaquettes, and thus the potential term. 3.4 Simplex method Initially, we decided to study the planar partition problems not only due to their equivalence to the quantum dimer model studied in details in the previous section, but also because of some properties of its configuration space. 
Essentially, as explained below, they allow us to approximate the Hamiltonian of a planar QPM [n1, n2|p] by an adapted Hamiltonian associated to a linear QPM [n1 · n2|p]. Since the Hilbert space of the latter is considerably smaller than that of the former, this should allow us to reduce the calculation time needed to find the ground state through diagonalizations. We will present how to use this approximation in this section, through a method that is a priori original and that we called the simplex method. But, before describing the method itself, we must briefly describe some properties of a classical partition problem's configuration space. To avoid any confusion between the diagonalization of the full planar QPM and of the approximate linear QPM obtained through the simplex method, we will use the usual term "exact diagonalization" only for the former.

3.4.1 Configuration space

Let us start by describing briefly the configuration space of a classical hyper-solid partition problem. Each solution of a partition problem [n1, n2, . . . , nd|p] can be associated to a point $(h_1, h_2, \ldots, h_K) = h$ in a K-dimensional space, called an integer point, with the coordinates $h_i$ following the order relations defined by eq. (3.1). These inequalities determine the geometry of the configuration space: each one of them defines a half-space in this K-dimensional space, and so their intersection defines a convex K-polytope F. It is important to emphasize that this polytope contains the configuration space of the partition problem, which is formed only by its (discrete) integer points. Since the $h_i$'s are by definition integers, it is easy to build a graph linking all the integer points of the configuration space, usually noted T. In general, two partitions h and h′ are neighbors, and thus linked directly on the graph T, if all their heights but one are identical, with the distinct heights $h_j$ and $h'_j$ differing only by one, $h_j = h'_j \pm 1$. In the stack representation of the partitions, this corresponds to adding or removing one of the boxes.

The most straightforward class of partition problems is that of the linear partitions, with parameters [n1|p] = [K|p]. In this case, the relations of eq. (3.1) become a simple chain of K + 1 inequalities, each one defining one of the configuration space's sides:

(3.13) $p \geq h_1 \geq h_2 \geq \ldots \geq h_{n_1} \geq 0.$

Such a polytope, with K + 1 sides living in a K-dimensional space, is a K-dimensional generalization of a triangle, called a simplex S; as one should expect, the simplices are the simplest polytopes one can construct. Figure 3.16 shows the polytopes F[3|p] and the graphs T[3|p] of a linear partition problem with n1 = 3 and various values of p.

Figure 3.16: Polytopes F and integer points of the linear partition problems [3|p], with p = 1, 2 and 3 (above), and the corresponding graphs T[3|p] (below). Raising p increases the size of the polytope, without changing its geometry.

There are two important properties of the partition problems, which are reflected in the geometrical symmetries of the equivalent physical systems.
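Since the integer points and their adjacency fully define T, the graph can be generated by brute force for small problems. The following is a minimal Python sketch (our own illustration, not part of the thesis code; it assumes nothing beyond the standard library):

    from itertools import product

    def linear_partitions(n, p):
        """All integer points of [n|p]: tuples with p >= h1 >= ... >= hn >= 0."""
        return [h for h in product(range(p, -1, -1), repeat=n)
                if all(h[i] >= h[i + 1] for i in range(n - 1))]

    def are_neighbors(h, hp):
        """Two partitions are linked in T if exactly one height differs, by one."""
        return sum(abs(a - b) for a, b in zip(h, hp)) == 1

    parts = linear_partitions(3, 2)
    edges = [(a, b) for k, a in enumerate(parts)
             for b in parts[k + 1:] if are_neighbors(a, b)]
    print(len(parts), "integer points,", len(edges), "edges")  # 10 points for [3|2]

For [3|2] this reproduces the 10 integer points of the corresponding polytope in fig. 3.16.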
First, raising the value of the dimensions {nl} or of the maximum height p will, as expected, increase the number of integer points, but they do so in different manners. The former are directly linked to the dimensionality K of the polytope and to the number of inequalities (eq. (3.1)) defining its form, and so changing them will modify its geometry. The maximum height p, on the other hand, determines the maximum possible value of a coordinate $h_i$, but does not alter the number of inequalities; changing it without touching the dimensions will only change the size of the polytope, keeping its geometry intact. This can be seen in fig. 3.16, where the polytope of the [3|p = 1] problem retains its tetrahedral form when p increases, while the number of integer points also increases.

Second, we can permute the dimensions {nl} and the maximum height p between themselves without changing the total number of partitions or their neighboring relations (and thus the structure of the configuration space). It is straightforward to see that permuting two dimensions nl and nk does not change the order relations of a partition problem (this is equivalent to a simple index exchange), but not so for a permutation between a dimension and p, since such an exchange will change the order relations and thus the whole geometry of the problem. We can visualize this exchange using a graphical example. Figure 3.17 shows three partitions, hA, hB and hC, from the problems [2, 2|3], [2, 3|2] and [3, 2|2], and their stack representations. Notice that the three stack configurations are geometrically identical, up to reflection operations. These reflection operations are equivalent to exchanging p with one of the dimensions {nl}, and such an operation will, as we said, alter the inequalities controlling the partition problem, and thus the polytope's geometry, its associated graph T and its dimensionality K. Still, the neighboring relations are conserved, and the new graph T′ can be easily mapped to the old one (see fig. 3.18 for an example with the graphs of the problems [3|2] and [2|3]). An important consequence of this property is that a d-dimensional partition problem [n1, n2, . . . , nd−1, nd|1] is equivalent to a (d−1)-dimensional partition problem:

(3.14) $[n_1, n_2, \ldots, n_{d-1}, 1|n_d] \equiv [n_1, n_2, \ldots, n_{d-1}|n_d].$

Figure 3.17: Equivalence between 3 planar partition problems.

Figure 3.18: "Unfolding" of the graph T[3|2] into T[2|3].

Normal decomposition: It can be shown that the K-polytope of any given partition problem [n1, n2, . . . , nd|p] can be decomposed into a series of non-disjoint simplices, each one identical in size and geometry to the simplex associated to the [K|p] linear partition problem. This decomposition is called the normal decomposition, and we will present it here explicitly for a [2, 2|p] planar partition problem (the principles are essentially the same for any hyper-solid partition problem). The partitions of this problem can be organized in a matrix,

(3.15) $\begin{pmatrix} h_1 & h_2 \\ h_3 & h_4 \end{pmatrix},$

with the order relation chains $p \geq h_1 \geq h_2 \geq h_4 \geq 0$ and $p \geq h_1 \geq h_3 \geq h_4 \geq 0$. Notice that $h_2$ and $h_3$ have no order relation between them. We can, then, divide the partitions of a [2, 2|p] problem into two types: those for which $h_3 \geq h_2$, and those for which $h_2 \geq h_3$.
Inserting these new conditions into the inequalities above results in a new pair of order relation chains:

(3.16) if $h_2 \geq h_3$: $\;p \geq h_1 \geq h_2 \geq h_3 \geq h_4 \geq 0$; if $h_2 \leq h_3$: $\;p \geq h_1 \geq h_3 \geq h_2 \geq h_4 \geq 0$.

Each one of these chains defines a 4-simplex of the normal decomposition of F[2, 2|p]. Their intersection is formed by the partitions with $h_2 = h_3$, and follows the order relations $p \geq h_1 \geq h_2 = h_3 \geq h_4 \geq 0$. Notice that these order relations define a 3-simplex. In general, the intersection of two K-simplices of the normal decomposition will itself be a (K − κ)-simplex, where κ is the number of equalities between the $h_i$'s used to define the intersection.

Each of these 4-simplices is identical in geometry to the simplex S[2·2|p] associated to the linear partition problem [4|p]. They can be seen as different projections of the solutions of this linear partition problem onto the ones of the original [2, 2|p] planar partition problem. Each projection (and thus each simplex of the decomposition) is associated to a different order relation chain. We can, then, lighten the notation by identifying each simplex with a simplex index list, formed by the indexes of the $h_i$'s of the corresponding order relation chain. For the relations shown in eq. (3.16), we have

(3.17a) $h_1 \geq h_2 \geq h_3 \geq h_4 \;\to\; \text{simplex } (1\,2\,3\,4),$
(3.17b) $h_1 \geq h_3 \geq h_2 \geq h_4 \;\to\; \text{simplex } (1\,3\,|\,2\,4).$

We will explain the meaning of the vertical black bar in a few moments; for now it is enough to consider only the index lists. They effectively show how to obtain the planar partitions of a simplex from the reordering of the heights of the associated linear partitions, and each can be interpreted as a "path" to follow when converting a linear partition into a planar one. For a practical example, take the linear partition (5, 2, 0, 0). Following the order of the first index list, we can build the corresponding planar partition:

$h_1 = 5,\; h_2 = 2,\; h_3 = 0,\; h_4 = 0 \;\to\; \begin{pmatrix} 5 & 2 \\ 0 & 0 \end{pmatrix}.$

Using the second index list, we find

$h_1 = 5,\; h_3 = 2,\; h_2 = 0,\; h_4 = 0 \;\to\; \begin{pmatrix} 5 & 0 \\ 2 & 0 \end{pmatrix}.$

Figure 3.19 shows the application of this procedure for the [4|2] linear partition problem, whose graph is shown in fig. 3.19a, and the two simplices of the [2, 2|2] planar problem (figs. 3.19b and 3.19c). Notice that the structure of the graph T[4|2] is conserved inside each normal simplex; in other words, two neighboring linear partitions xA and xB will generate neighboring planar partitions hα and hβ. While we limited these figures to the case p = 2, this reasoning is still valid for any value of p, since the geometry of the polytope is independent of p.

Figure 3.19: Normal decomposition of the [2, 2|2] planar partition problem: graphs of (a) the [4|2] partition problem, (b) the (1 2 3 4) simplex and (c) the (1 3 | 2 4) simplex. The "zigzag" diagrams show how the partitions of the linear problem should be read to build the simplices (1 2 3 4) and (1 3 | 2 4), and the gray points represent the interface between the simplices.

Table 3.1: Simplices of the normal decomposition of the partition problem [2, 3|p].

Number of descents | Simplices
0 | (1 2 3 4 5 6)
1 | (1 2 4 | 3 5 6), (1 4 | 2 3 5 6), (1 2 4 5 | 3 6)
2 | (1 4 | 2 5 | 3 6)

Let us describe how to build index lists like the ones in eq. (3.17) for any hyper-solid partition problem. The most straightforward simplex of a given partition problem can be found by setting the simplex indexes in increasing order, (1 2 3 . . . K), just as in eq. (3.17a).
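The reordering encoded by these index lists is mechanical, and a small sketch (ours, for illustration only) reproduces the two projections of (5, 2, 0, 0) given above:

    def project_to_simplex(linear, index_list):
        """Map a linear partition onto a planar one by reordering its heights
        according to a simplex index list (1-based indices, as in the text)."""
        planar = [0] * len(linear)
        for pos, idx in enumerate(index_list):
            planar[idx - 1] = linear[pos]
        return tuple(planar)

    # The two simplices of the [2, 2|p] decomposition, eq. (3.17):
    print(project_to_simplex((5, 2, 0, 0), (1, 2, 3, 4)))  # (5, 2, 0, 0)
    print(project_to_simplex((5, 2, 0, 0), (1, 3, 2, 4)))  # (5, 0, 2, 0)

Reading the output tuples as $(h_1, h_2, h_3, h_4)$, the two planar partitions above are recovered.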
Some of the other simplices can be built by permuting two indexes $i_j$ and $i_{j+1}$ of this "original" simplex, when this is allowed by the problem's order relations. These permutations will create a descent in the otherwise increasing list, and they are usually marked by a black bar on the index list (as in eq. (3.17b)). Further permutations on the new simplex index lists will generate new simplices, and will either move the descent inside the index list or create new ones. Table 3.1 shows, for example, the simplices for the [2, 3|p] partition problems, organized by their number of descents. Now that we know how to index the simplices properly, we can represent graphically the structure of a normal decomposition by constructing a decomposition graph, where each simplex is associated to a node, and two nodes are linked if they share an interface defined by $h_{i_j} = h_{i_{j+1}}$. Figure 3.20 shows the corresponding graphs for a [2, 2|p] and for a [2, 3|p] problem.

Figure 3.20: Simplices of the normal decomposition of (a) a [2, 2|p] and (b) a [2, 3|p] partition problem, organized on a graph. Each vertex represents a simplex, and the edges link the simplices sharing an interface defined by one equality $h_{i_j} = h_{i_{j+1}}$.

The concept of descents is very important because they identify where two simplices have an interface defined by only one equality of the form $h_{i_j} = h_{i_{j+1}}$. This is especially useful when trying to determine the number of integer points of a partition problem using its normal decomposition. All the simplices of the decomposition have the same number of integer points, but we must be careful not to count the interfaces twice. In the notation used above for the simplex index lists, each one of the largest interfaces is marked only once, with a black bar on an index list, and so we can use the descents to track them. This is the idea behind the Ehrhart polynomial, which gives us the number of integer points of a partition problem by taking into account the normal decomposition and the number of descents in each simplex. This polynomial was used, for example, by Destainville et al. (ref. ) to calculate the configurational entropy of classical tiling models.

3.4.2 Description of the method

We will now describe the simplex method itself, which, as we said before, allows us to approximate the Hamiltonian of a [n1, n2|p] planar QPM by an adapted [n1 · n2|p] linear QPM, using the information obtained from the former problem's normal decomposition to choose the appropriate adaptations. We will start by analyzing the block diagonalization of a [2, 2|p] planar QPM, which initially inspired us to create this method, and then present it for a general planar QPM.

Block diagonalization of a [2, 2|p] problem: This class of partition problems has the simplest normal decomposition, formed by only two simplices. The graph representing it, seen in fig. 3.20a, is thus symmetric under reflection, and we can use this symmetry to block diagonalize the Hamiltonian of the associated planar QPM, $\hat H_{[2,2|p]}$, into two independent blocks: one linked to a symmetric basis, noted $\hat H_S$ (and which will contain, in general, the ground state and the first excited one), and another associated to an anti-symmetric basis, noted $\hat H_A$.
Since the normal decomposition is independent of p, we can restrict ourselves to the [2, 2|1] planar QPM, for which the block diagonalization process is straightforward. The classical basis of the Hilbert space associated to this QPM, spanned by the partitions of the associated problem, is given by the states

(3.18) $\left|\begin{smallmatrix}0&0\\0&0\end{smallmatrix}\right\rangle,\; \left|\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right\rangle,\; \left|\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right\rangle,\; \left|\begin{smallmatrix}1&0\\1&0\end{smallmatrix}\right\rangle,\; \left|\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right\rangle,\; \left|\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right\rangle.$

These kets can be organized according to their simplices:

$(1\,2\,3\,4):\; \left|\begin{smallmatrix}0&0\\0&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right\rangle;$
$(1\,3\,|\,2\,4):\; \left|\begin{smallmatrix}0&0\\0&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&0\\1&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right\rangle, \left|\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right\rangle.$

The only states not shared by both simplices are $\left|\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right\rangle$ and $\left|\begin{smallmatrix}1&0\\1&0\end{smallmatrix}\right\rangle$. The new orthonormal basis is then

(3.19) Symmetric basis: $\left|\begin{smallmatrix}0&0\\0&0\end{smallmatrix}\right\rangle,\; \left|\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right\rangle,\; \frac{1}{\sqrt2}\left(\left|\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right\rangle + \left|\begin{smallmatrix}1&0\\1&0\end{smallmatrix}\right\rangle\right),\; \left|\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right\rangle,\; \left|\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right\rangle$; Anti-symmetric basis: $\frac{1}{\sqrt2}\left(\left|\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right\rangle - \left|\begin{smallmatrix}1&0\\1&0\end{smallmatrix}\right\rangle\right)$,

and finally the matrix associated to the Hamiltonian $\hat H_{[2,2|1]}$ becomes block diagonal,

$\hat H_{[2,2|1]} \to \begin{pmatrix} \hat H_S & \\ & \hat H_A \end{pmatrix} = \begin{pmatrix} V & -t & & & & \\ -t & 3V & -\sqrt2\,t & & & \\ & -\sqrt2\,t & 2V & -\sqrt2\,t & & \\ & & -\sqrt2\,t & 3V & -t & \\ & & & -t & V & \\ & & & & & 2V \end{pmatrix}.$

In terms of the normal decomposition, this basis transformation can be seen as two "combinations" of the simplices (1 2 3 4) and (1 3 | 2 4), with weights equal to (1, 1) for the symmetric basis and weights (1, −1) for the anti-symmetric one. Notice that, due to the positive weights, the symmetric basis has exactly the same number of elements as one of the simplices, and each of them corresponds to one of the partitions of the linear [2 · 2|1] problem. We can, then, make a projection between the symmetric basis and the basis elements of the linear QPM associated to the [4|1] problem:

(3.20) $|0000\rangle \to \left|\begin{smallmatrix}0&0\\0&0\end{smallmatrix}\right\rangle,\quad |1000\rangle \to \left|\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right\rangle,\quad |1100\rangle \to \frac{1}{\sqrt2}\left(\left|\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right\rangle + \left|\begin{smallmatrix}1&0\\1&0\end{smallmatrix}\right\rangle\right),\quad |1110\rangle \to \left|\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right\rangle,\quad |1111\rangle \to \left|\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right\rangle.$

We can invert this projection and visualize $\hat H_S$ in terms of the Hilbert space spanned by the quantum [4|1] linear partitions. This corresponds to an adapted QPM, with a Hamiltonian $\hat H_{\rm Smplx}$ identical in form to that of the original linear QPM, but with re-weighted coefficients reflecting the structure of the original planar QPM. Indeed, it is easy to build $\hat H_{[4|1]}$ and compare it to $\hat H_S$ to see that the only difference between them is in the weights of the coefficients. Seen from this angle, the block diagonalization done above (which, we recall, is valid for any maximum height p) is equivalent to restricting the study of the [2, 2|p] planar QPM to a smaller part of its Hilbert space, which is equivalent to the Hilbert space of an adapted [4|p] linear QPM.

Simplex method: We would like to generalize the projection done above into a method applicable to other planar partition problems, which we will call the simplex method due to its dependency on the normal decomposition of the configuration space into simplices. This would allow us to reduce the size of the matrix that must be considered when calculating the ground state and the energy of the first excited state through exact diagonalizations (ED), and thus reduce the calculation time. We want, then, to exchange the Hamiltonian given in eq. (3.7), defined over the whole Hilbert space of the [n1, n2|p] planar QPM, for a Hamiltonian

(3.21) $\hat H_{\rm Smplx} = -\sum_A \sum_{\langle B\rangle_A} t'_{AB}\,|A\rangle\langle B| \;+\; V \sum_A n'_A\,|A\rangle\langle A|,$

defined on the Hilbert space spanned by the partitions of the [n1 · n2|p] linear problem.
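Returning for a moment to the [2, 2|1] example, the block structure above can be verified numerically in a few lines. In this sketch (our own; the flippable-plaquette counts are the ones that can be read off the diagonal of the matrix above, and t, V are arbitrary test values):

    import numpy as np

    t, V = 1.0, 0.7
    # Classical basis order: |0000>, |1000>, |1100>, |1010>, |1110>, |1111>
    nflip = [1, 3, 2, 2, 3, 1]                          # diagonal counts n_A
    edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]  # single-box flips
    H = np.diag(V * np.array(nflip, dtype=float))
    for i, j in edges:
        H[i, j] = H[j, i] = -t

    s = 1 / np.sqrt(2)
    U = np.zeros((6, 6))
    U[0, 0] = U[1, 1] = U[4, 3] = U[5, 4] = 1.0   # states common to both simplices
    U[2, 2], U[3, 2] = s, s                        # symmetric combination
    U[2, 5], U[3, 5] = s, -s                       # anti-symmetric combination
    print(np.round(U.T @ H @ U, 3))   # block diagonal: 5x5 H_S and 1x1 H_A = (2V)

The transformed matrix reproduces the 5×5 tridiagonal block $\hat H_S$ and the 1×1 block $\hat H_A = (2V)$, with no coupling between them.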
Here, |A⟩ and |B⟩ are the basis states defined by the partitions of the linear problem, and the coefficients $t'_{AB}$ and $n'_A$ are to be determined through the basis transformation; essentially, they must take into account the neighboring relations of the planar partitions {Ai} and {Bi} created by mapping the linear partitions |A⟩ and |B⟩ into the simplices of the normal decomposition. Unfortunately, the reflection symmetry used for the [2, 2|p] problem is no longer a symmetry between all the simplices of the normal decomposition for larger planar partition problems. This is reflected, for example, in the decomposition graph of the [2, 3|p] problem, fig. 3.20b, which is only symmetric under reflection across a horizontal line. This means that any projection of the planar partitions onto a single simplex will only be exact for the [2, 2|p] partition problems. Still, we can use it to approximate the energies of more general planar partition problems.

We propose, then, the following procedure to build this approximation of a [n1, n2|p] planar QPM. Let us note the projected planar partitions as

(3.22) $|C_k\rangle = \sum_{i=1}^{M} \eta_i\,|A_{k,i}\rangle,$

where the sum runs over all the M simplices of the normal decomposition. Each ket $|A_{k,i}\rangle$ is the mapping of a linear partition $|A_k\rangle$ into the i-th simplex of the normal decomposition, with the $|A_k\rangle$'s forming the orthonormal basis of the [n1 · n2|p] linear QPM. If the weights $\eta_i$, which must reflect the structure of the decomposition graph, are all positive, we are guaranteed that all the states $|C_k\rangle$ are non-zero, each one equivalent to a single linear partition $|A_k\rangle$. These projected states are by construction mutually orthogonal, and so we will use them to define an orthogonal basis of the region of the planar QPM's Hilbert space that will be projected onto the linear QPM's Hilbert space.

We must now choose the weights $\eta_i$ in such a way that we recover the block diagonalization seen above when the simplex method is applied to the [2, 2|p] case. To do so, we chose to use the adjacency matrix $\mathcal{M}$ of the decomposition graph, whose elements are defined by

$\mathcal{M}_{ij} = \begin{cases} 1 & \text{if the simplices } i \text{ and } j \text{ are connected on the graph;} \\ 0 & \text{otherwise.} \end{cases}$

For the graphs seen in fig. 3.20, we have the adjacency matrices

$\mathcal{M}_{[2,2|p]} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad\text{and}\quad \mathcal{M}_{[2,3|p]} = \begin{pmatrix} 0&1&0&0&0 \\ 1&0&1&1&0 \\ 0&1&0&0&1 \\ 0&1&0&0&1 \\ 0&0&1&1&0 \end{pmatrix}.$

The Perron-Frobenius theorem guarantees that these adjacency matrices always have a single eigenvector with all of its elements positive, associated to the highest eigenvalue, which we will use to define the weights $\eta_i$. Now that we have the weights $\eta_i$, we can finish building the states {|Ck⟩}, build an orthonormal basis with them, and determine the coefficients $t'_{AB}$ and $n'_A$ through a basis transformation, obtaining the adapted Hamiltonian $\hat H_{\rm Smplx}$. Notice, however, that while the |Ck⟩'s are orthogonal between themselves, we have no guarantee that they are orthogonal to the rest of the transformed basis of the full planar QPM Hilbert space. It is straightforward, though, to see that the simplex method reduces to the block diagonalization of the [2, 2|p] case with this choice of $\eta_i$: the eigenvectors of $\mathcal{M}_{[2,2|p]}$ are (1, 1) and (1, −1), which correspond exactly to the symmetric and anti-symmetric weights used above.
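A minimal sketch of this choice of weights (ours; it only assumes numpy), using the two adjacency matrices above:

    import numpy as np

    def simplex_weights(M):
        """Weights eta_i from the Perron-Frobenius eigenvector of the
        decomposition graph's adjacency matrix (largest eigenvalue)."""
        vals, vecs = np.linalg.eigh(M)
        # the Perron vector has a uniform sign, so taking |.| makes it positive
        return np.abs(vecs[:, np.argmax(vals)])

    M22 = np.array([[0, 1], [1, 0]], dtype=float)
    M23 = np.array([[0, 1, 0, 0, 0],
                    [1, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1],
                    [0, 1, 0, 0, 1],
                    [0, 0, 1, 1, 0]], dtype=float)
    print(simplex_weights(M22))   # proportional to (1, 1)
    print(simplex_weights(M23))

For $\mathcal{M}_{[2,2|p]}$ this returns weights proportional to (1, 1), i.e., the symmetric combination, as required.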
3.4.3 Implementation and application of the simplex method

Let us now present some of the results obtained through the simplex method. We wrote an algorithm that constructs the decomposition graph and the adjacency matrix of the given planar partition problem, and then projects the partitions of the equivalent linear problem into the simplices of the normal decomposition, allowing us to build the new basis {|Ck⟩} and the adapted Hamiltonian $\hat H_{\rm Smplx}$. From this point, we obtained the first few energy levels through diagonalizations. Both the application of the simplex method and the diagonalization are memory- and time-intensive, even if we do not save any information about the full planar QPM orthonormal basis. We decided to parallelize the simplex method code using the Open MPI implementation of the Message Passing Interface (MPI), allowing us to use a computer cluster to mitigate these problems. For the diagonalizations, we used the sparse linear algebra libraries PETSc and SLEPc, which also use MPI.

To test our algorithm, we first compared the results of the simplex method to the ones obtained through ED for the [2, 2|3] planar QPM, for which the two must be identical. Figure 3.21 shows the results for the gap, and we can see that the simplex method (blue asterisks) coincides very well with the ED (black curve). We also plotted in this figure the gaps for the [2, 3|2] planar QPM, which has a configuration space equivalent to the one of the [2, 2|3] (section 3.4.1), and thus the same spectrum. Its exact diagonalization (black X's) agrees well with the [2, 2|3] results, as expected, but the same cannot be said about the simplex method (red curve). The latter has the same qualitative behavior as the other results, but the effects of the approximations made are visible.

Figure 3.21: Gaps obtained through the simplex method ("smplx") and the full ED ("ED") for the [2, 2|3] and the [2, 3|2] planar quantum partition models.

This system is too small (only 10 plaquettes) to allow us to draw any serious conclusions about the behavior of the QPM, but we can already identify a transition near the RK point, V/t = 1, where the gap drops to zero and the ground state is the equivalent of the staggered phase for the quantum dimer models under partition boundary conditions. This behavior near the RK point is retained for larger partition problems, such as the [3, 3|3], [4, 4|4] and [5, 5|4] cases (fig. 3.22), but we see a considerably different behavior for V/t ≪ 0. The behavior in this region depends on whether the partition problem has one or two partitions maximizing the number of flips, which is (are) the ground state(s) in the V/t → −∞ limit. In the latter case, valid for the [3, 3|3] and the [2, 2|3] problems, the ground state in this limit is degenerate, and thus we have a decreasing gap.

Figure 3.22: Gaps obtained through the simplex method ("smplx") and the full ED ("ED") for (a) the [3, 3|3] and (b) the [4, 4|4] planar quantum partition models, and through the simplex method for (c) the [5, 5|4] planar QPM.
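Before comparing the sizes of the two Hilbert spaces in table 3.2 below, note that both dimensions can be computed in closed form: the planar one from MacMahon's box-counting formula for plane partitions, and the linear one from a binomial coefficient. A short sketch (ours) reproducing the table's entries:

    from math import comb
    from fractions import Fraction

    def dim_planar(n1, n2, p):
        """MacMahon's box formula: plane partitions in an n1 x n2 x p box."""
        r = Fraction(1)
        for i in range(1, n1 + 1):
            for j in range(1, n2 + 1):
                r *= Fraction(p + i + j - 1, i + j - 1)
        return int(r)

    def dim_linear(K, p):
        """Linear partitions [K|p]: K weakly decreasing heights bounded by p."""
        return comb(K + p, p)

    for n1, n2, p in [(2, 2, 3), (3, 3, 3), (4, 4, 4), (5, 5, 4)]:
        full, red = dim_planar(n1, n2, p), dim_linear(n1 * n2, p)
        print(f"[{n1},{n2}|{p}]: {full} -> {red} ({100 * red / full:.2f}%)")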
Table 3.2: Sizes of the full Hilbert space of a QPM and the corresponding reduced Hilbert space obtained through the simplex method.

Full QPM dimensions | Full Hilbert space size | Reduced QPM dimensions | Reduced Hilbert space size | Ratio
[2, 2|3] | 50 | [4|3] | 35 | 70%
[2, 3|2] | 50 | [6|2] | 28 | 56%
[3, 3|3] | 980 | [9|3] | 220 | 22.5%
[4, 4|4] | 232,484 | [16|4] | 4,845 | 2%
[5, 5|4] | 16,818,516 | [25|4] | 23,751 | 0.14%

The simplex method results in a considerably smaller Hilbert space, and thus a smaller Hamiltonian $\hat H_{\rm Smplx}$ to be diagonalized: the number of simplices in the normal decomposition of a problem [n1, n2|p] increases with its size, and so the ratio between the dimensions of $\hat H_{\rm Smplx}$ and $\hat H_{\rm QPM}$ decreases as the partition problem grows, which is a very advantageous trait. Table 3.2 compares these dimensions for some of the systems we simulated; for the largest one, corresponding to a [5, 5|4] problem, the reduced space is only 0.14% of the full one. Also, the approximation is not bad, agreeing qualitatively with the exact diagonalizations, with differences due to the fact that it does not truly block-diagonalize the original planar QPM Hamiltonian, $\hat H_{\rm QPM}$. Still, one strong limitation of our implementation is the necessity of building the projection of each simplex of the decomposition. At worst, this step scales as {number of simplices} · dim($\hat H_{\rm Smplx}$), which is larger than dim($\hat H_{\rm QPM}$). For the larger systems, the application of the simplex method itself consumed more time than the exact diagonalization of $\hat H_{\rm Smplx}$, although the combined time was smaller than the time needed to diagonalize $\hat H_{\rm QPM}$ with the same numerical precision. Improving this construction step, together with finding a better choice of the simplex weights than the one made here, are the two main paths for the refinement of the simplex method.

Chapter 4

Classical planar partitions: from the amoebae to the arctic circle

In section 3.2.1, we presented the equivalence between the planar partition models and classical models such as the rhombus tilings and the classical dimer coverings. Another classical model that can be represented by the planar partitions is a constrained growth model: using the stack representation of a [n1, n2|p] planar partition problem, seen in fig. 3.3, one can map a partition h = {hk} into a stacking of small cubes inside a large box of dimensions n1 × n2 × p, each integer hk representing the height at the position k (the number of vertically stacked cubes). We can then focus on the piecewise linear interface (made of the square upper faces of the topmost cubes) and analyze its typical shape (for a large box, this interface is smoothed out). When analyzed as a function of the partition's total height (the sum of all parts), this planar partition model displays two typical shapes. Viewed as a height model, consider the case where the total height stays small enough, compared to the edge lengths, that the extremities of the edges are essentially never reached. Starting from an empty partition and sequentially increasing the total added part (the "growth inside a corner" model), the corresponding growing interface approaches a mathematical interface called an "amoebae". Such a shape is also found in the dual model of crystal corner melting, where all parts initially take their maximal value. Olejarz et al. proposed in ref. an equation describing the evolution of such interfaces.
Figure 4.1: Illustration of the interface growth model proposed by Olejarz et al. (figure taken from ref. ).

Now, when the partition's total height is large enough to make the system sensitive to the existence of the full bounding box, another, quite different asymptotic shape is found. Viewed as a rhombus tiling with no restrictions on the total height, one finds the arctic circle phenomenon (described briefly in section 3.2.1), which separates fluctuating regions from totally frozen regions. Here we will study a thermodynamic classical partition model, using analytical calculations and Monte Carlo simulations (based on a straightforward Metropolis-Hastings algorithm), which is shown to display both asymptotic shapes as the temperature is increased, with a crossover instead of a sharp transition. However, transition regimes (named interface transitions here) can be identified when looking at more local parameters.

4.1 Description of the constrained corner growth model

Let us start by describing the constrained corner growth model and its general behavior. Consider a [Lx, Ly|Lz] planar partition problem, and associate the dimensions Lx and Ly to the axes $\hat x$ and $\hat y$ of the box containing the interface, and Lz to the vertical axis. Here we will focus on the symmetric case, where Lx = Ly = Lz = L, but for completeness we will present the definitions in the more general notation. Define the energy $H_{Tot}$ of a given partition as the sum of its integer parts (or, in this context, heights),

(4.1) $H_{Tot}[h] = \sum_k h_k.$

The partition function of this model reads

(4.2) $Z = \sum_{h\in C} e^{-\beta H_{Tot}[h]},$

where β = 1/T is the inverse temperature and C is the ensemble of partitions of a given problem. At a given temperature T, each partition has a weight $e^{-\beta H_{Tot}[h]}/Z$, and the average total height reads

(4.3) $\bar H_T = \Big\langle \sum_i h_i \Big\rangle = \frac{\sum_{h\in C} e^{-\beta H_{Tot}[h]}\, H_{Tot}[h]}{Z}.$

It happens that the partition function is exactly known here, thanks to the MacMahon generating function for planar partitions, which is given by

(4.4) $B(q, L_x, L_y, L_z) = \prod_{i=1}^{L_x}\prod_{j=1}^{L_y} \frac{1 - q^{\,i+j+L_z-1}}{1 - q^{\,i+j-1}}.$

The generating function is a polynomial in q, such that the integer factor in front of $q^p$ counts the number of partitions with total height p. Since the Hamiltonian here is precisely the total height, this generating function gives the partition function as

(4.5) $Z_{L_x,L_y,L_z} = B\left(e^{-\beta}, L_x, L_y, L_z\right).$

While we proposed this constrained corner growth model for the planar partitions, nothing forbids us from applying it to other partition problems. In particular, we will consider the linear (or 1D) partitions [Lx|Ly], and use them as a "warm up" for a more detailed description of the planar case, in section 4.3. The MacMahon generating function and the corresponding partition function for the 1D case are

(4.6) $B(q, L_x, L_y) = \prod_{i=1}^{L_x} \frac{1 - q^{\,i+L_y}}{1 - q^{\,i}}, \qquad Z_{L_x,L_y} = B\left(e^{-\beta}, L_x, L_y\right).$

Back to the planar model, the evolution with temperature of the interface associated to this model can be observed in fig. 4.2, which shows the evolution of the average height interface for a planar partition with L = Lx = Ly = Lz = 240 as a function of the temperature T. For low temperatures, the average height interface has a characteristic amoebae form (fig. 4.2a) that grows with the temperature without considerably changing its shape or its concavity, as long as the system's boundaries are relatively far from the amoebae's "core" (fig. 4.2b).
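Equations (4.4) and (4.5) make the exact thermodynamics of the model directly computable. A minimal sketch (ours; the centered difference implements $\bar H_T = -\partial_\beta \ln Z$, used in eq. (4.14) below):

    import numpy as np

    def logZ_planar(beta, Lx, Ly, Lz):
        """ln Z from MacMahon's formula, eqs. (4.4)-(4.5), with q = exp(-beta)."""
        q = np.exp(-beta)
        i = np.arange(1, Lx + 1)[:, None]
        j = np.arange(1, Ly + 1)[None, :]
        return np.sum(np.log(1 - q**(i + j + Lz - 1)) - np.log(1 - q**(i + j - 1)))

    def mean_height(T, L, eps=1e-6):
        """Average total height: -d ln Z / d beta, by centered difference."""
        b = 1.0 / T
        return -(logZ_planar(b + eps, L, L, L)
                 - logZ_planar(b - eps, L, L, L)) / (2 * eps)

    for T in (1.0, 10.0, 100.0, 1000.0):
        print(T, mean_height(T, 60))   # crosses over towards L**3 / 2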
For T/L ≈ 0.1 to 0.5 a crossover is observed, from the growing amoebae shape to an "inflated" amoebae interface (figs. 4.2c and 4.2d). This is accompanied by the appearance, near the system's edges, of a region of maximal height that grows with increasing temperature. For asymptotically high temperatures the average height approaches the so-called arctic circle configuration (figs. 4.2e and 4.2f), characterized by non-fluctuating (frozen) regions both at zero and at maximal height, separated by a fluctuating region. This region assumes an asymptotically perfect circular form when seen from the angle presented in fig. 4.2f, similarly to what we have seen in the previous chapter for the rhombus tilings (see fig. 3.5), but it is not a flat interface (see fig. 4.3a).

Figure 4.2: Average height for a planar partition model at different temperatures T ((a)-(f): T = 11.2, 15.8, 44.7, 79.4, 562.3 and 10000), for a system of size L = n1 = n2 = p = 240. The level curves are intersections of the interface with spheres centered at the point (L/2, L/2, L/2).

For all temperatures, we can identify up to three regions on each interface, marked in fig. 4.3b for T = 10000, where the interface's height is (I) frozen and equal to the maximum possible value, hk = Lz; (II) between the maximum and minimum values, 0 < hk < L; and (III) frozen and equal to the minimum possible value, hk = 0. All interfaces present at least the regions (II) and (III), with the exception of the limit T → 0, where the interface is defined by the empty partition. The presence of the frozen minimal region at low temperatures can be easily understood based on energetic considerations alone. The presence of these frozen regions at high temperatures is more subtle, associated with the interplay between entropic considerations and the boundary conditions.

Figure 4.3: Other view angles of the average height for a planar partition model at T = 10000, for a system of size L = n1 = n2 = p = 240. The projection of this interface on the XY plane can be divided into three regions, with different heights hk: (I) hk = L; (II) 0 < hk < L; and (III) hk = 0.

4.2 Thermodynamic limit

Figure 4.2 shows qualitatively the transformation between the amoebae and the arctic circle interfaces. To study it more precisely, we will measure, among other observables, the total average height $\bar H_T$, defined in eq. (4.3). Figures 4.4a and 4.4c show $\bar H_T$ obtained through Monte Carlo simulations for, respectively, 1D and 2D partitions and different system lengths L. In both cases there are three different regimes: (1) for very low temperatures, T ≪ 1, $\bar H_T$ follows the asymptote $e^{-1/T}/(1 + e^{-1/T})$; (2) for temperatures small compared to the system's size L, $\bar H_T$ is proportional to $T^{d+1}$, with d = 1, 2; and (3) for high temperatures, $\bar H_T$ tends to an asymptotic value of $L^{d+1}/2$.

Figure 4.4: Total height $\bar H_T$ for the d = 1 (a,b) and the d = 2 (c,d) partitions, obtained from Monte Carlo simulations with different system sizes (L = 300, 1500 for d = 1 and L = 30, 60, 120, 240 for d = 2). Figures (a) and (c) have no re-scaling factors; the black lines are the asymptotes for t = T/L ≪ 1 (respectively, $\bar H_T \simeq \zeta(2)\,T^2$ and $\bar H_T \simeq 2\zeta(3)\,T^3$), and the dashed line is the asymptote for T ≪ 1, $e^{-1/T}/(1 + e^{-1/T})$, which is identical for both d = 1 and d = 2. In figures (b) and (d), the temperature T is re-scaled by L and $\bar H_T$ by the bounding box volume $L^{d+1}$, and the black lines show the asymptotic behavior for L → ∞ (eq. (4.16) for d = 1 and eq. (4.17) for d = 2).
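The MC points in fig. 4.4 can be reproduced with a very simple single-site Metropolis scheme, since a move changes the energy, eq. (4.1), by ±1. A minimal sketch of one sweep (our own conventions for the height array; not the thesis code):

    import numpy as np
    rng = np.random.default_rng(42)

    def metropolis_sweep(h, beta, p):
        """One sweep for the planar model, eq. (4.2): h is an Lx x Ly integer
        array obeying h[i,j] >= h[i+1,j] and h[i,j] >= h[i,j+1]."""
        Lx, Ly = h.shape
        for _ in range(Lx * Ly):
            i, j = rng.integers(Lx), rng.integers(Ly)
            d = 1 if rng.random() < 0.5 else -1
            hn = h[i, j] + d
            if not 0 <= hn <= p:
                continue
            # partition constraints; walls act as p (above/left) and 0 (below/right)
            up    = h[i - 1, j] if i > 0 else p
            left  = h[i, j - 1] if j > 0 else p
            down  = h[i + 1, j] if i < Lx - 1 else 0
            right = h[i, j + 1] if j < Ly - 1 else 0
            if not min(up, left) >= hn >= max(down, right):
                continue
            # Delta H_Tot = d, so the Metropolis rule is min(1, e^{-beta*d})
            if d == -1 or rng.random() < np.exp(-beta):
                h[i, j] = hn

    L = 30
    h = np.zeros((L, L), dtype=int)
    for _ in range(500):
        metropolis_sweep(h, beta=1.0 / (0.5 * L), p=L)   # T = tL with t = 0.5
    print(h.sum())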
The behavior in the first regime can be explained by the fact that, for very low temperatures, the weight of every non-empty partition is very small compared to that of the empty partition, so we can approximate $\bar H_T$ by keeping only the latter and the partition with the first height equal to one, obtaining the asymptote

(4.7) $\bar H_{T\ll 1} = \frac{0\cdot e^{-\beta\cdot 0} + 1\cdot e^{-\beta}}{e^{-\beta\cdot 0} + e^{-\beta}} = \frac{e^{-1/T}}{1 + e^{-1/T}}.$

Notice that, up until the end of the second regime, all the curves are superimposed. In these regimes, the amoebae interface grows without being affected by the boundaries of the box surrounding it. In the third regime, these boundaries start to affect the interface, and we have a transition towards the arctic circle interface. We can better describe this transition by taking the thermodynamic limit L → ∞ and plotting the re-scaled average total height $\bar h_T$ as a function of the re-scaled temperature t, with

(4.8) $\bar h_T = \frac{\bar H_T}{L^{d+1}} \quad\text{and}\quad t = T/L.$

It is useful here to calculate the analytical form of $\bar h_T$ in the thermodynamic limit. We can do this by taking the continuous limit of the MacMahon generating functions, eqs. (4.4) and (4.6). To do so, let us consider the following re-scalings of the model's parameters:

(4.9) $L_x = \ell_x L,\quad L_y = \ell_y L,\quad L_z = \ell_z L \;\text{(for 2D partitions)},\quad\text{and}\quad T = tL.$

Substituting these into eqs. (4.4) and (4.6) and taking the large-L limit, we obtain the partition functions of the continuous 2D and 1D models,

(4.10) $Z_{L_x,L_y,L_z} = \exp\left[\sum_{i=1}^{L_x}\sum_{j=1}^{L_y} \ln\!\left(\frac{1-e^{-(i+j+L_z-1)/T}}{1-e^{-(i+j-1)/T}}\right)\right] \simeq \exp\left[L^2 \int_0^{\ell_x}\!\!dx \int_0^{\ell_y}\!\!dy\; \ln\!\left(\frac{1-e^{-(x+y+\ell_z)/t}}{1-e^{-(x+y)/t}}\right)\right] = \exp\left(L^2 F_{\ell_x,\ell_y,\ell_z}\right),$

and

(4.11) $Z_{L_x,L_y} \simeq \exp\left[L \int_0^{\ell_x}\!\!dx\; \ln\!\left(\frac{1-e^{-(x+\ell_y)/t}}{1-e^{-x/t}}\right)\right] = \exp\left(L\, F_{\ell_x,\ell_y}\right).$

The scaled free energies $F_{\ell_x,\ell_y}$ and $F_{\ell_x,\ell_y,\ell_z}$ can be written using the polylogarithm function $\mathrm{Li}_n(z) = \sum_{k=1}^\infty z^k/k^n$ and the Riemann zeta function as

(4.12) $F_{\ell_x,\ell_y} = t\left[-\zeta(2) - \frac{\ell_x\ell_y}{t^2} - \mathrm{Li}_2\!\left(e^{\frac{\ell_x+\ell_y}{t}}\right) + \mathrm{Li}_2\!\left(e^{\frac{\ell_x}{t}}\right) + \mathrm{Li}_2\!\left(e^{\frac{\ell_y}{t}}\right)\right],$

(4.13) $F_{\ell_x,\ell_y,\ell_z} = t^2\left[\zeta(3) - \frac{\ell_x\ell_y\ell_z}{t^3} - \mathrm{Li}_3\!\left(e^{\frac{\ell_z+\ell_x+\ell_y}{t}}\right) + \mathrm{Li}_3\!\left(e^{\frac{\ell_x+\ell_y}{t}}\right) + \mathrm{Li}_3\!\left(e^{\frac{\ell_z+\ell_x}{t}}\right) + \mathrm{Li}_3\!\left(e^{\frac{\ell_z+\ell_y}{t}}\right) - \mathrm{Li}_3\!\left(e^{\frac{\ell_x}{t}}\right) - \mathrm{Li}_3\!\left(e^{\frac{\ell_y}{t}}\right) - \mathrm{Li}_3\!\left(e^{\frac{\ell_z}{t}}\right)\right].$

In both the 1D and the 2D models, the average total height is given by

(4.14) $\bar H_T = -\partial_\beta \ln Z = -L\,\partial_{t^{-1}} \ln Z,$

and thus $\bar h_T$ is equal to

(4.15) $\bar h_T = \frac{\bar H_T}{L^{d+1}\prod_i \ell_i} = -\frac{1}{\prod_i \ell_i}\,\partial_{t^{-1}} F,$

where F is either eq. (4.12) or eq. (4.13), depending on the dimension of the model. Finally, we can get the thermodynamic limit in the symmetric case by taking $\ell_i = 1$, obtaining

(4.16) $\bar h_{T,\mathrm{1D}} = 1 - t^2\left[\zeta(2) + \frac{2}{t}\ln\!\left(1 + e^{1/t}\right) + 2\,\mathrm{Li}_2\!\left(-e^{1/t}\right)\right],$

(4.17) $\bar h_{T,\mathrm{2D}} = 1 + t^3\left[2\zeta(3) + \frac{3}{t}\left(\mathrm{Li}_2\!\left(e^{1/t}\right) - 2\,\mathrm{Li}_2\!\left(e^{2/t}\right) + \mathrm{Li}_2\!\left(e^{3/t}\right)\right) - 2\left(3\,\mathrm{Li}_3\!\left(e^{1/t}\right) - 3\,\mathrm{Li}_3\!\left(e^{2/t}\right) + \mathrm{Li}_3\!\left(e^{3/t}\right)\right)\right].$

Equations (4.16) and (4.17) are both analytic functions of t, with the limits

(4.18) $\lim_{t\to\infty} \bar h_T = \frac12, \qquad \lim_{t\to 0} \bar h_T = \begin{cases} \zeta(2)\,t^2 & \text{for 1D partitions,} \\ 2\zeta(3)\,t^3 & \text{for 2D partitions.} \end{cases}$

The re-scaled data is plotted in figs. 4.4b and 4.4d, together with the analytical formulas obtained from eqs. (4.12), (4.13) and (4.15). In both cases, the $\bar h_T$ obtained through Monte Carlo simulations re-scales very well onto the analytical result, and especially onto the limits shown in eq. (4.18): the curves present a smooth crossover around t ≃ 1, from the power law of the t → 0 limit to the asymptotic approach of the saturated value $\bar h_T = 1/2$ for t → ∞. This saturated value can be easily explained using entropic arguments.
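The analytic curve for d = 1, eq. (4.16), is straightforward to evaluate with an arbitrary-precision library such as mpmath (assumed available); a sketch:

    from mpmath import mp, mpf, exp, log, polylog, zeta

    mp.dps = 30  # high precision, since the polylog terms nearly cancel

    def hbar_1d(t):
        """Eq. (4.16): re-scaled mean total height of the symmetric 1D model."""
        t = mpf(t)
        return 1 - t**2 * (zeta(2) + (2 / t) * log(1 + exp(1 / t))
                           + 2 * polylog(2, -exp(1 / t)))

    for t in (0.01, 0.1, 1, 10, 100):
        print(t, hbar_1d(t))

The printed values approach $\zeta(2)t^2$ for small t and 1/2 for large t, matching eq. (4.18).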
The distribution of the re-scaled total height $h_T$ of a partition is symmetric around the value 1/2: for each partition with $h_T = h$, there is an "inverted" partition with $h_T = 1 - h$. Since at infinite temperature all configurations have the same weight, this results in a mean total re-scaled height $\bar h_T$ equal to 1/2. Also, the former asymptotes are in accordance with what we have seen in figs. 4.4a and 4.4c. Since the first derivative of the free energy with respect to the inverse temperature β is proportional to $\bar h_T$, shown above to be always smooth, no phase transition in the thermodynamic sense arises as a function of temperature, in either the 1D or the 2D case. The appearance and disappearance of the frozen regions presented in fig. 4.3b therefore have no signature in global thermodynamic quantities.

4.3 Boundary transitions

1D problems: We now turn to the detailed analysis of the spatially resolved average height. Let us consider first the 1D case, to verify the validity of the MC algorithm. Indeed, in this case, we can easily deduce from eq. (4.6) an analytical formula for the height profile $\bar H_X$ in the thermodynamic limit. For a given linear [Lx|Ly] partition problem, the mean height at the position X is given by $\bar H_X = \sum_Y Y\, P^{X,Y}_{L_x,L_y}$, where $P^{X,Y}_{L_x,L_y}$ is the probability of having a configuration with height Y at the position X. This probability is equal to the statistical weight of these partitions, which we will note $Z^{X,Y}_{L_x,L_y}$, divided by the partition function $Z_{L_x,L_y}$.

Figure 4.5: Generic representation of the linear partitions of the problem [Lx|Ly] with a height Y at the position X.

Figure 4.5 shows a graphical representation of such configurations, which we can divide into an X × Y rectangle, common to all the partitions with height Y at the position X (region A), and two parts that vary from configuration to configuration. The possible configurations inside these regions can themselves be described by smaller linear partition problems: one, to the right of the X × Y rectangle, is bounded in height by Y and in length by Lx − X (region B); the other, found above the rectangle, is bounded in length by X − 1 and in height by Ly − Y (region C). From this, we can deduce that $Z^{X,Y}_{L_x,L_y}$ is equal to the statistical weight of an X × Y rectangle (region A) times the partition functions of the [Lx − X|Y] and [X − 1|Ly − Y] linear problems (corresponding respectively to regions B and C). We have then the probability

(4.19) $P^{X,Y}_{L_x,L_y} = \frac{q^{XY}\, Z_{X-1,L_y-Y}\, Z_{L_x-X,Y}}{Z_{L_x,L_y}}.$

Let us consider the case Lx = Ly and take again the continuous limit, with

(4.20) $X = xL,\quad Y = yL,\quad\text{and}\quad T = tL.$

The re-scaled height profile $\bar h_x = \bar H_X/L$ is then equal to

(4.21) $\bar h_x = \frac1L \sum_Y Y\, P^{X,Y}_{L_x,L_y} = \frac1L \sum_Y Y\, \frac{q^{XY}\, Z_{X-1,L_y-Y}\, Z_{L_x-X,Y}}{Z_{L_x,L_y}} \simeq \int_0^1 dy\; y\, \exp\!\left[L\left(-\frac{xy}{t} + F_{x,1-y} + F_{1-x,y} - F_{1,1}\right)\right],$

where the F functions are given by eq. (4.12). This integral can be approximated by its saddle point value, which obeys the relation

(4.22) $\partial_y\left[-\frac{xy}{t} + F_{x,1-y} + F_{1-x,y} - F_{1,1}\right] = 0.$

Developing it, we find the implicit function

(4.23) $e^{(1+x+y)/t} + e^{(x+y)/t} - e^{(1+x)/t} - e^{(1+y)/t} = 0,$

which describes the height profile $\bar h_x = y$ for a given re-scaled temperature t.
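Equation (4.23) can in fact be solved for y in closed form, which is convenient for drawing the curves of fig. 4.6. A sketch (ours; the one-line rearrangement in the docstring is our own algebra, not taken from the thesis):

    import numpy as np

    def profile_y(x, t):
        """Height profile from eq. (4.23). Dividing that equation by e^{1/t} and
        setting a = e^{x/t}, b = e^{y/t} gives a*b*(1 + e^{-1/t}) = a + b,
        hence y = t * ln[a / (a*(1 + e^{-1/t}) - 1)]."""
        a = np.exp(x / t)
        return t * np.log(a / (a * (1.0 + np.exp(-1.0 / t)) - 1.0))

    for x in np.linspace(0.0, 1.0, 6):
        print(round(float(x), 1), round(profile_y(x, 0.5), 4))   # y(0)=1, y(1)=0

The endpoints y(0) = 1 and y(1) = 0 follow directly from eq. (4.23), and for small t the curve hugs the axes, as expected for a low-temperature profile.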
Figure 4.6: Height profile of a 1D partition, with (a) L = 300 and (b) L = 1500, for different temperatures. The points were obtained through the MC simulations, and the curves represent the analytical results, drawn using eq. (4.23).

Figure 4.6 depicts the mean value of the height $h_i$ as a function of the position, for different temperatures and for L = 300 and L = 1500, obtained through the Monte Carlo simulations (points) and through eq. (4.23) (curves). For low and high temperatures the analytic and numerical results are in very good agreement. The discrepancy at intermediate temperature values can be explained by the difficulty the Monte Carlo algorithm has to converge, due to the long auto-correlation times observed in this region of parameters, but the results are qualitatively correct. Notice that, in both cases, the height profile (which is the equivalent of the interface for the 1D case) does not change its shape.

2D problems: The 2D case is considerably different from the one-dimensional one, as the mean local height undergoes the shape transition depicted in fig. 4.2. Here the mean height $\bar h_r = \bar h_{x,y}$ presents three different behaviors as a function of its coordinates, which can be identified with the regions shown in fig. 4.3b. In region (I), near the origin, two transitions of the local mean height are observed: the first, at t = t0, indicates when $\bar h_r$ passes from a vanishing height to a finite value, 0 < $\bar h_r$ < 1; after the second transition, arising at a re-scaled temperature t = t1 > t0, the mean height is locked to its maximal value $\bar h_r$ = 1. In region (II), arising for intermediate values of (x, y), only the first transition is observed. Finally, in region (III), $\bar h_r$ = 0 for all values of t. For concreteness, we concentrated our numerical studies on the height along the diagonal x = y, defined as

(4.24) $h_s = \frac{1}{L}\, h_{(X=sL,\,Y=sL)}, \quad\text{with } s \in [0, 1].$

While the "growth inside a corner" problem is physically different from the present thermodynamical approach (the former has no temperature), both present amoebae-like shapes, and so we find it interesting to compare more precisely, for low temperature (t < 1), our results for $\bar h_s$ with the shape conjectured by Olejarz et al. in ref. . Since the notation in their article conflicts with the one used here, we will mark their variables with a tilde. In their work, they study the growth of an interface in a corner as a function of the time $\tilde t$, obtaining, among other results, a formula for the height $\tilde z$ (equivalent to our $\bar h_s$) of the interface over the diagonal plane $\tilde x = \tilde y$ (equivalent to s, in our notation),

(4.25) $\frac{\tilde x}{\tilde t} = \frac12\,\frac{\tilde z}{\tilde t} - \frac34\left(\frac{\tilde z}{\tilde t}\right)^{2/3} + \frac14.$

Figure 4.7 shows this equation (dashed line) and some results from our simulations for L = 240 and low temperatures. In this figure, we chose the re-scalings in such a way that $\tilde z/\tilde t = \bar h_s/\bar h_0$ and $\tilde x/\tilde t = s/\bar h_0$, allowing a direct comparison between the MC data and eq. (4.25). While the data does not always match Olejarz et al.'s results, all the curves have a similar behavior, indicating that the interfaces seen in figs. 4.2a to 4.2c follow the conjecture of ref. at least qualitatively.

Figure 4.7: Comparison between the diagonal height $\bar h_s$ for amoebae interfaces, and the diagonal height profile of the interface conjectured in ref. .

Figure 4.8: Diagonal height $\bar h_s$, with s = 1/15 and s = 1/2, as a function of T/L.
Whether the shape obtained by us is quantitatively different from the one obtained in their work is an open question, certainly worth further study.

Let us pass now to the transition between the amoebae interface and the one presenting the arctic circle phenomenon. Figure 4.8 shows the re-scaled value of the height $\bar h_s = \langle h_s\rangle$ along the diagonal for s = 1/15 and s = 1/2, as a function of t. These values correspond respectively to regions (I) and (II), and the Monte Carlo results obtained for different values of L clearly indicate the height transitions expected in the thermodynamic limit. To describe them more precisely, we can study the fluctuations of the diagonal height. Figures 4.9a and 4.9b show the behavior of the fluctuations of the mean height, characterized by the variance

$\sigma^2 = L^2\left[\langle h_s^2\rangle - \langle h_s\rangle^2\right],$

for different values of L, again for s = 1/15 and s = 1/2. The Monte Carlo data shows that the variance vanishes in the thermodynamic limit for t < t0 or t > t1 in the former case, and for t < t0 in the latter. This result is consistent with the freezing of the average height to $\bar h_s$ = 0 or $\bar h_s$ = 1, respectively. For t0 < t < t1 there is a fluctuating regime, characterized by a finite value of σ² that seems to converge in the thermodynamic limit. Typically, near continuous thermodynamic phase transitions, the susceptibility defined as χ = σ² diverges in the thermodynamic limit as χ ∝ |t − tc|^{−γ}. Interestingly, at the freezing transitions observed in figs. 4.9a and 4.9b, the susceptibility does not diverge. This fact indicates a negative value of the critical exponent γ. We reiterate that this is a transition observed in a local quantity, and thus it does not represent a phase transition in the thermodynamic sense.

Figure 4.9: Cumulants for the 2D partitions for L = 30, 60, 120. Figures (a) and (b) show the variance σ² (equal to the second cumulant κ2) for the diagonal positions s = 1/15 and s = 1/2, respectively.

Figure 4.10: Kurtosis γ2 = κ4/σ⁴ for the diagonal positions s = 1/15 and s = 4/5. The crossings of the kurtosis allow us to determine the critical temperatures T0 and T1.

In order to pinpoint the temperatures at which the freezing transitions occur for each height $\bar h_s$, we can use the kurtosis of the mean height, defined as

(4.26) $\gamma_2 = \frac{\kappa_4}{\sigma^4},$

where κ4 is the fourth cumulant of $h_s$,

(4.27) $\kappa_4 = L^4\left[\langle h_s^4\rangle - 4\langle h_s^3\rangle\langle h_s\rangle - 3\langle h_s^2\rangle^2 + 12\langle h_s^2\rangle\langle h_s\rangle^2 - 6\langle h_s\rangle^4\right].$

Near a continuous phase transition this quantity, considered as a function of t and L, γ2(t, L), is expected to show scale-free behavior at the transition point, tending towards a critical value γc:

(4.28) $\lim_{L\to\infty} \gamma_2(t_c, L) = \lim_{L\to\infty} \gamma_2(t_c, bL) = \gamma_c.$

We can, then, use it to determine the values t0 and t1. Figures 4.10a and 4.10b show γ2 for s = 1/15 and s = 4/5 as a function of t, where the Monte Carlo data suggests a crossing of the curves. Using these crossings, we determined the values of tc for the different freezing transitions, depicted in fig. 4.11 as a function of s.
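In practice, γ2 is estimated from the MC samples through the moments of $h_s$; a minimal sketch (ours):

    import numpy as np

    def gamma2(samples):
        """Kurtosis gamma_2 = kappa_4 / sigma^4 from MC samples of the diagonal
        height h_s; the L^4 and (L^2)^2 prefactors cancel in the ratio."""
        m1, m2, m3, m4 = (np.mean(samples**k) for k in (1, 2, 3, 4))
        k4 = m4 - 4 * m3 * m1 - 3 * m2**2 + 12 * m2 * m1**2 - 6 * m1**4
        return k4 / (m2 - m1**2)**2

The crossing of γ2(t, L) for two system sizes then locates t0 or t1, as in fig. 4.10.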
The curves of t0 and t1 follow the asymptotes of the thermodynamic limit, given by the arctic circle phenomenon and equal to s = 0.146 and s = 0.854, and they allow us to clearly differentiate the three regions (I), (II) and (III).

Figure 4.11: Critical temperatures T0 and T1 for different positions s on the partition's diagonal. The vertical lines indicate the analytic asymptotes, s = 0.1464 and s = 0.854.

Final analysis: We have shown here that the two typical shapes, the amoebae and the arctic circle, are found in a thermodynamic model as the temperature is raised. While this model does not show a "bulk" thermodynamic transition, behaviors characteristic of transitions are indeed found when focusing on the boundaries of the model.

The partition models are not limited to one and two dimensions, but can be defined in higher dimensions. The solid partition case, which was already considered by MacMahon himself, is a rich problem, for which no simple generating function has been found. Asymptotic shapes have been studied for solid partitions [62, 63]. In that case, an "arctic" octahedral shape, represented in fig. 4.12b, has been found numerically. This shape should be recovered in the infinite-temperature limit of a thermodynamic problem formulated with an energy proportional to the total height, in a fashion similar to what we have done in this chapter. By mapping the solid partition configuration into a height model in a fourth dimension, it is also possible here to define a "growth inside a hyper-corner" model. By analogy with the 2D case, one can expect a growing interface with a shape close to a hyperbolic tetrahedron, as shown in fig. 4.12a. Such problems would be worth further study.

Figure 4.12: Expected asymptotic shapes in the solid partition problem. (a) For the growth inside a "hyper-corner" in 4 dimensions, the boundary is expected to have this kind of shape (a hyperbolic, concave tetrahedron). (b) The non-frozen region for a solid partition; the 3D boundary between the frozen and non-frozen rhombohedra has an octahedral shape (taken from ref. ).

Conclusion and perspectives

A first objective of this thesis was to study the Quantum Dimer Models on a honeycomb lattice, in order to determine whether the plaquette phase of the RK model is gapped, as suggested in the literature through indirect measurements. Using a cluster-update Monte Carlo algorithm, we were able to measure the quantum energy and the gap of the RK model, showing that the plaquette phase is gapped. The method used to determine these observables depends on the correlations between the different layers of the equivalent classical 3D Ising model used to run the simulations, and the gap measured is an upper bound on the real gap, which approaches the latter as the system's temperature decreases. On most of the plaquette phase's domain, the gap curves converge to a single curve, allowing us to confirm directly that this phase is gapped. We were not able, though, to converge these curves in a sizable region of the plaquette phase's domain, near the RK point. In this region the gap is very small, tending towards zero as we approach this point.
This effect shows up slightly in the RMS magnetization of the CIM, used originally to determine the phase transition between the star and plaquette phases, but it can be seen mainly in order parameters such as the local and sublattice dimer densities, where we can no longer clearly identify the plaquette phase. In terms of the MC algorithm, this small gap results in long correlation times between the measurements, since the MC algorithm is unable to differentiate between the ground state and the first excited one. To solve this problem, we must run more simulations with temperatures low enough to differentiate these two states. From a numerical point of view, we may also use a continuous-time Monte Carlo method. This would remove the measurement error due to the Trotter-Suzuki decomposition, which discretizes the temperature, and perhaps give access to lower temperatures more efficiently.

We followed the study of the RK model with a more general version of it, which we called the V0 − V3 model, with an additional potential term proportional to the number of 0-plaquettes, and found a rich phase diagram depending on the topological order. Adapting the Monte Carlo algorithm used for the former model was a matter of changing the acceptance probabilities to ones compatible with the new potential term, and using it we mapped out this phase diagram, with phase transitions between different flux sectors. We analyzed qualitatively the evolution of several of these transitions using ideal, dynamical chain-like structures, which serve as building blocks of the local dimer density. We were able to identify in this phase diagram four different regions with different behaviors, all meeting at the RK point, making the region near it the most interesting of the whole phase diagram. Using perturbation analysis near this point, we found a fan-like progression of the phase transitions, with a continuous variation of the ground state's flux sector. The behavior of the fan region is compatible with the Cantor deconfinement proposed by Fradkin et al. Going forward, we are interested in running more simulations, for more flux sectors, to describe this model's phase diagram in greater detail. It would also be interesting to further study the phase transitions inside each flux sector, as we did for the zero flux sector, to better describe the behavior of the ground state in each sector. Also, inside the zero flux sector, we have seen that the boundary between the star and plaquette phases becomes hard to determine as it approaches the RK point. This phenomenon is also due to the vanishing gap seen for the RK model, and to resolve it we must run more simulations at lower temperatures. Mainly, we must study in more detail the complex order parameter presented in fig. 1.12, which indicates the presence of a possible U(1) symmetry near the RK point.

We have also studied the RK model for dimer coverings equivalent to the so-called planar partition problems. As we have seen, the boundary conditions imposed by this equivalence result in a behavior completely different from the one expected for the typical periodic boundary conditions: the phase transition between the star and plaquette phases loses much of its amplitude, and we see a series of smooth transitions, composed of local transformations of the dimer covering.
The presence of an interface between a star and a staggered domain results in a ground state with a complex structure, mixing both the star and plaquette dimer structures. We proposed an interpretation using the ideal chain structures that had proved useful for the V0 − V3 model, which allowed us to tentatively analyze why this series of transitions is energetically more advantageous than having only the typical star-plaquette transition. In view of this large difference with respect to the standard RK quantum dimer model, it would certainly be interesting to further study the V0 − V3 Hamiltonian with the constrained boundary of quantum partitions. The latter does not allow one to define different fluxes belonging to disconnected sectors, since the whole Hilbert space is connected by the flip operations. On the one hand, this allows a simple investigation of the phase diagram (we do not need to scan several flux sectors for each point in the V0 − V3 plane). On the other hand, we expect a rich arrangement of the dynamical chains, mimicking those of the flux phase diagram towards the center, but with a complex arrangement towards the boundary. Also, the Hamiltonian of the QPM can be applied to partition problems other than the planar ones studied here, including the 3D (or solid) partition problems. There exists indeed a mapping between the configuration space of the solid partitions' QPM and that of a (rather complicated) 3D quantum Ising model, which, in principle, can be studied with a (3+1)D world-line MC algorithm.

Regarding the simplex method, the reduction of the Hamiltonian's size obtained through it is drastic enough to motivate further development, both from the point of view of the approximations chosen and of the algorithm implementation.

Finally, still in the context of the partition problems, we started to study a classical model which can be applied to the corner crystal growth and corner crystal melting problems, using the planar partitions. We analyzed this model using analytical calculations and Monte Carlo simulations, finding that both the amoebae surface, found in crystal corner melting models, and the arctic circle phenomenon are present in this model, at opposite sides of the phase diagram. Also, while we do not have a phase transition between them in the thermodynamic sense, we were able to identify transitions in the local mean heights, which describe the crossover between these two surfaces. An extension of this model involves considering the quantum case, possibly using the V0 − V3 model applied to the planar partition models, which we could use to study quantum crystal corner melting.

Appendix A

From the 2D quantum Ising model to a classical 3D Ising model

In section 1.3.2, we described how the Ising-type quantum model of eq. (1.2) on the 2D triangular lattice can be mapped to a 3D classical Ising model on a stack of 2D triangular lattices, to allow for an efficient world-line Monte Carlo simulation. Let us check here that the partition functions and expectation values of diagonal observables do indeed coincide, up to corrections that are of third order in the imaginary-time step Δβ = β/N, as claimed in eqs. (1.7) and (1.8).
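Before going through the algebra, note that the single-site identification derived below, $\langle\sigma| e^{\Delta\beta t \hat\sigma^x} |\sigma'\rangle = A\,e^{K_\tau\sigma\sigma'}$, is easy to check numerically; a sketch with arbitrary test values (ours):

    import numpy as np

    t, db = 0.7, 0.05                      # hopping amplitude and time step
    x = db * t
    # exp(x * sigma_x) = cosh(x) * I + sinh(x) * sigma_x
    M = np.cosh(x) * np.eye(2) + np.sinh(x) * np.array([[0.0, 1.0], [1.0, 0.0]])
    Ktau = -0.5 * np.log(np.tanh(x))       # from e^{-2 K_tau} = tanh(db * t)
    A = np.sqrt(np.sinh(2 * x) / 2)        # from A^2 = sinh(2 db t) / 2
    for s, sp in [(+1, +1), (+1, -1)]:
        lhs = M[(1 - s) // 2, (1 - sp) // 2]
        print(lhs, A * np.exp(Ktau * s * sp))   # the two columns agree

Both matrix elements (aligned and anti-aligned spins) match $A\,e^{K_\tau\sigma\sigma'}$ to machine precision.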
Based on the second-order Trotter-Suzuki decomposition $e^{\lambda(\hat A+\hat B)} = e^{\frac{\lambda}{2}\hat A} e^{\lambda \hat B} e^{\frac{\lambda}{2}\hat A} + O(\lambda^3)$, the quantum partition function can be expanded as

\begin{align}
Z_\mathrm{QIM} &= \mathrm{Tr}\,\big(e^{-\Delta\beta(\hat H_z+\hat H_x)}\big)^N
 = \sum_{\sigma} \prod_{n=1}^{N} \langle\sigma^n| e^{-\Delta\beta(\hat H_z+\hat H_x)} |\sigma^{n+1}\rangle \\
&= \sum_{\sigma} \prod_{n=1}^{N} \langle\sigma^n| e^{-\Delta\beta \hat H_z}\, e^{-\Delta\beta \hat H_x} |\sigma^{n+1}\rangle + O(\Delta\beta^3) \\
&= \sum_{\sigma} \prod_{n=1}^{N} e^{-\Delta\beta H_z(\sigma^n)} \prod_i \langle\sigma_i^n| e^{\Delta\beta t \hat\sigma_i^x} |\sigma_i^{n+1}\rangle + O(\Delta\beta^3),
\end{align}

where $\sigma^{N+1} \equiv \sigma^1$. With $\langle\sigma| e^{\Delta\beta t \hat\sigma^x} |\sigma'\rangle = \cosh(\Delta\beta t)\,\delta_{\sigma,\sigma'} + \sinh(\Delta\beta t)\,\delta_{\sigma,-\sigma'}$ and $A e^{K_\tau \sigma\sigma'} = A\big(e^{K_\tau}\delta_{\sigma,\sigma'} + e^{-K_\tau}\delta_{\sigma,-\sigma'}\big)$, we can identify $\langle\sigma| e^{\Delta\beta t \hat\sigma^x} |\sigma'\rangle = A e^{K_\tau \sigma\sigma'}$, where $e^{-2K_\tau} = \tanh(\Delta\beta t)$ and $A^2 = \sinh(2\Delta\beta t)/2$.

Using this result and the definition $\mathcal{A} := A^{LN}$ in the expansion of $Z_\mathrm{QIM}$ ($N$ is the number of imaginary-time steps and $L$ the number of lattice sites), one obtains the connection between the quantum and the classical partition functions

\begin{align}
\frac{Z_\mathrm{QIM}}{\mathcal{A}} &= \sum_{\sigma} \prod_{n=1}^{N} e^{-\Delta\beta H_z(\sigma^n) + \sum_i K_\tau^i \sigma_i^n \sigma_i^{n+1}} + O(\Delta\beta^3) \\
&= \sum_{\sigma} e^{-\left(\sum_n K_z H_z(\sigma^n) - \sum_{n,i} K_\tau^i \sigma_i^n \sigma_i^{n+1}\right)} + O(\Delta\beta^3)
 = Z_\mathrm{CIM} + O(\Delta\beta^3),
\end{align}

with $K_z$ and $K_\tau$ as specified in eq. (1.9). The normalization factor $\mathcal{A}$ cancels in the evaluation of expectation values for observables $\hat O = O(\{\hat\sigma_i^z\})$ that are diagonal in the $\{\hat\sigma_i^z\}$-eigenbasis, for which one obtains, in the same way as for the partition functions,

\[
\langle \hat O \rangle_\mathrm{QIM} = \frac{\mathrm{Tr}\big(e^{-\beta\hat H}\hat O\big)}{Z_\mathrm{QIM}}
= \frac{\sum_\sigma e^{-E_\mathrm{CIM}(\sigma)}\, O(\sigma)}{Z_\mathrm{CIM}} + O(\Delta\beta^3)
= \langle O \rangle_\mathrm{CIM} + O(\Delta\beta^3).
\]

However, the factor $\mathcal{A} = [\sinh(2\Delta\beta t)/2]^{LN/2}$ [eq. (1.7b)] needs to be taken into account in the evaluation of non-diagonal observables such as the energy $\langle\hat H_\mathrm{QIM}\rangle$ of the quantum system, as described in appendix C.

Appendix B

Monte Carlo sampling and 1D cluster updates

With the quantum-classical mapping described in section 1.3.2, we have constructed the classical model $E_\mathrm{CIM}(\sigma)$, eq. (1.6), in such a way that its partition function and expectation values of observables are identical to those of the quantum model, as expressed in eqs. (1.7) and (1.8). The imaginary-time step $\Delta\beta = \beta/N$ of the quantum model enters the coupling constants $K_z$ and $K_\tau^i$ of the classical model according to eq. (1.9), and the inverse temperature itself determines the number $N$ of time slices, i.e., the extension of the classical Ising model in the time direction. The classical model is then formally sampled at $\beta_\mathrm{CIM} = 1$. In the Monte Carlo algorithm, we generate a Markov chain of classical states such that each state $\sigma$ occurs with a frequency that corresponds to its weight $e^{-E_\mathrm{CIM}(\sigma)}/Z$ in the classical ensemble. As explained in section 1.3.2, expectation values of diagonal observables $\hat O = O(\{\hat\sigma_i^z\})$ can then be evaluated by averaging $O(\sigma^n)$ (any choice of the time slice $n$ or, additionally, any average over the time slices $n$) with respect to the states generated by the algorithm. Non-diagonal observables can be addressed as exemplified in appendix C.

In Monte Carlo simulations, it is essential to obey detailed balance, i.e., with the state probabilities $\pi(\sigma) := e^{-E(\sigma)}$ (in the following $E(\sigma) \equiv E_\mathrm{CIM}(\sigma)$) and the state transition probabilities denoted by $p(\sigma\to\sigma')$, we require

(B.1)  $\pi(\sigma)\,p(\sigma\to\sigma') = \pi(\sigma')\,p(\sigma'\to\sigma)$.

Separating the transition probability into proposal and acceptance probabilities, $p(\sigma\to\sigma') = P(\sigma\to\sigma')\,A(\sigma\to\sigma')$, detailed balance can be achieved by using the Metropolis choice

(B.2)  $A(\sigma\to\sigma') := \min\left(1,\; \frac{\pi(\sigma')\,P(\sigma'\to\sigma)}{\pi(\sigma)\,P(\sigma\to\sigma')}\right)$.

As outlined in section 1.3.3, we base the simulation on flips of 1D clusters, oriented along the time direction, in order to avoid problematically low acceptance probabilities when decreasing $\Delta\beta$. This type of update is inspired by the Swendsen-Wang and Wolff cluster algorithms [65, 66].
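Before turning to the cluster updates, the single-site identification from appendix A, $\langle\sigma| e^{\Delta\beta t \hat\sigma^x} |\sigma'\rangle = A e^{K_\tau \sigma\sigma'}$, is easy to verify numerically. A minimal sketch in Python ($\Delta\beta$ and $t$ are arbitrary test values, not parameters from the thesis):

import numpy as np

dbeta, t = 0.05, 1.3
K_tau = -0.5 * np.log(np.tanh(dbeta * t))      # e^{-2 K_tau} = tanh(dbeta t)
A = np.sqrt(np.sinh(2 * dbeta * t) / 2)        # A^2 = sinh(2 dbeta t)/2

# exp(dbeta * t * sigma_x) = cosh(dbeta t) * I + sinh(dbeta t) * sigma_x
M = np.array([[np.cosh(dbeta * t), np.sinh(dbeta * t)],
              [np.sinh(dbeta * t), np.cosh(dbeta * t)]])

for i, s in enumerate([+1, -1]):          # row index 0 <-> spin up
    for j, sp in enumerate([+1, -1]):
        assert np.isclose(M[i, j], A * np.exp(K_tau * s * sp))
print("single-site identification verified")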
The 1D cluster updates for the time direction of the classical Ising model eq. (1.6) are equivalent to cluster updates in an Ising chain $H_\mathrm{eff} = -K_\tau \sum_n \sigma_i^n \sigma_i^{n+1} + \sum_n h_n \sigma_i^n$ with site-dependent effective magnetic fields $h_n$ which encode the change in the number of flippable spins in time slices $n$. Denoting by $N_f^n$ the total number of flippable spins in time slice $n$ and by $\Delta N_f^n$ the change in this number due to flipping the spin $\sigma_i^n$, the effective magnetic field reads

(B.3)  $h_n = K_z V \Delta N_f^n$.

Remember that the potential term $\propto V$ in the Hamiltonian, eq. (1.1), counts the number of flippable spins. The chain consists of flippable spins and ends at time slices $m$ and $m' > m$ where the first non-flippable spins occur. Because of the effective magnetic fields $h_n$, the actual Wolff cluster update is not applicable (even for the 1D problem $H_\mathrm{eff}$). In the following, we describe an algorithm that is similar to the original Wolff cluster update in the sense that the clusters consist of parallel spins. Modifications are only due to the $h_n$. In principle, one can ignore the effective magnetic fields $h_n$ in the construction of the Wolff cluster. After the construction of a cluster, one would then flip it not with probability one as usual, but with a probability that takes into account the energy change $\Delta E_h := K_z V \Delta N_f$ due to the effective fields $h_n$ and potentially unflippable spins at the cluster ends. At least for small $|K_\tau/h_n|$, the resulting rejection rates would however be high. Also, the probability factor $e^{-\Delta E_h}$ may get small for big clusters even if $|K_\tau/h_n|$ is big and, thus, lead to a high rejection rate. Hence, it is favorable to take account of the energy changes due to the field terms $\propto h_n$ already during the construction of the clusters. The algorithm works as follows:

(i) Start from a (consistent) random initial state $\sigma_0$. Also, determine the number $N_f$ of flippable spins in $\sigma_0$.

(ii) Choose a random flippable spin (site $i$, time slice $n$).

(iii) Let $\sigma_0 := \sigma_i^n$. Starting from the initial site $(n,i)$, go forward and backward along the direction of imaginary time, respectively, to build a 1D cluster of parallel spins. As long as the spin at the currently considered cluster boundary has magnetization $\sigma_i^{n'} = \sigma_0$ and is flippable, add it to the cluster with probability

  $q(\Delta N_f^{n'}) := \big(1 - e^{-2K_\tau}\big) \cdot \min\big(1,\, e^{-K_z V \Delta N_f^{n'}}\big)$.

In the following, let us denote the time slices that define the boundary of the obtained cluster by $m$ and $m' > m$, such that the cluster consists of time slices $m+1, m+2, \dots, m'-1$. Let $f_i^m, f_i^{m'} \in \{0,1\}$ label whether the boundary spins are flippable (one) or not (zero).

(iv) Accept the flip of the cluster $\sigma_i^k \to \sigma_i'^k = -\sigma_i^k$ for all $m < k < m'$ with probability

(B.4)  $A(\sigma\to\sigma') = \min\left(1,\; \frac{N_f}{N_f + \Delta N_f}\, e^{-K_z V \Delta N_f^n}\, e^{-2K_\tau \sigma_0 (\sigma_i^m + \sigma_i^{m'})} \prod_{k=m,m'} \big[1 - q(\Delta N_f^k)\big]^{-f_i^k \sigma_i^k \sigma_0}\right)$.

Why this rule guarantees detailed balance, and why it is useful, is explained below.

(v) If the number of cluster updates surpasses a certain threshold $\propto LN$, evaluate and store observables of interest, and reset the update counter to zero.

(vi) If you have accepted the transition in step (iv), update the spin configuration $\sigma \to \sigma'$ and $N_f \to N_f + \Delta N_f$. Go to step (ii).

Equation (B.4) is based on the Metropolis choice eq. (B.2) for the acceptance probability. The proposal probability for the cluster between time slices $m$ and $m'$ is given by

\[
P(\sigma\to\sigma') = \frac{1}{N_f} \prod_{\substack{m<k<m' \\ k\neq n}} q(\Delta N_f^k) \times \prod_{k=m,m'} \big[1 - q(\Delta N_f^k)\big]^{f_i^k \delta(\sigma_i^k,\, \sigma_0)},
\]

where $\delta(\sigma,\sigma')$ denotes the Kronecker delta.
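For concreteness, the add-probability of step (iii) and the acceptance rule of eq. (B.4) are straightforward to transcribe into code. A minimal sketch (the helper names and the argument bookkeeping are mine; the surrounding simulation is assumed to supply the various $\Delta N_f$ values and the boundary flags $f_i^k$):

import math

def q_add(dNf, K_tau, KzV):
    # Probability of adding a parallel, flippable spin to the 1D cluster,
    # q(dNf) = (1 - e^{-2 K_tau}) * min(1, e^{-Kz V dNf}), as in step (iii).
    return (1.0 - math.exp(-2.0 * K_tau)) * min(1.0, math.exp(-KzV * dNf))

def acceptance(Nf, dNf_total, dNf_seed, K_tau, KzV,
               sigma0, s_m, s_mp, f_m, f_mp, dNf_m, dNf_mp):
    # Metropolis acceptance of the cluster flip, eq. (B.4).
    # s_m, s_mp: boundary spins at slices m and m'; f_m, f_mp in {0, 1}.
    ratio = (Nf / (Nf + dNf_total)
             * math.exp(-KzV * dNf_seed)
             * math.exp(-2.0 * K_tau * sigma0 * (s_m + s_mp)))
    for s, f, dNf in ((s_m, f_m, dNf_m), (s_mp, f_mp, dNf_mp)):
        # boundary factor [1 - q]^(-f * s * sigma0); equals 1 when f = 0
        ratio *= (1.0 - q_add(dNf, K_tau, KzV)) ** (-f * s * sigma0)
    return min(1.0, ratio)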
Correspondingly,

\[
P(\sigma'\to\sigma) = \frac{1}{N_f + \Delta N_f} \prod_{\substack{m<k<m' \\ k\neq n}} q(-\Delta N_f^k) \times \prod_{k=m,m'} \big[1 - q(\Delta N_f^k)\big]^{f_i^k \delta(\sigma_i^k,\, -\sigma_0)}.
\]

Due to the fact that $q(-\Delta N_f^k)/q(\Delta N_f^k) = e^{K_z V \Delta N_f^k}$, we obtain

\[
\frac{P(\sigma'\to\sigma)}{P(\sigma\to\sigma')} = \frac{N_f}{N_f + \Delta N_f}\, e^{\Delta E_h - K_z V \Delta N_f^n} \times \prod_{k=m,m'} \big[1 - q(\Delta N_f^k)\big]^{-f_i^k \sigma_i^k \sigma_0},
\]

where $\Delta E_h = K_z V \sum_{k=m+1}^{m'-1} \Delta N_f^k = K_z V \Delta N_f$. Multiplying this with $\pi(\sigma')/\pi(\sigma) = e^{-\Delta E}$, with the total energy change $\Delta E = \Delta E_h + 2K_\tau \sigma_0(\sigma_i^m + \sigma_i^{m'})$, yields eq. (B.4). In the formula eq. (B.4) for the acceptance probability, one has only the factor $e^{-K_z V \Delta N_f^n}$ instead of $e^{-\Delta E_h} = e^{-K_z V \sum_{k=m+1}^{m'-1} \Delta N_f^k}$. So the effective magnetic fields $h_n$ are taken into account during the cluster construction, and may reduce the cluster size, but they do not occur in the cluster-flip acceptance formula and can hence not increase the rejection rate.

Modifications for the V0-V3 model: It is simple to adapt the algorithm above for the Hamiltonian of the V0-V3 model, eq. (2.3). The Hamiltonian of the equivalent QIM, eq. (2.4), differs only by a new potential term which counts the number of 0-plaquettes, $\rho_0$. Due to this, only the parts of the derivation above that depend on the potential energy must be altered. The new effective magnetic field is written as

(B.5)  $h_n = K_z V_3 \Delta\rho_3^n + K_z V_0 \Delta\rho_0^n$,

where $\Delta\rho_j^n$ is the change in the number of $j$-plaquettes due to flipping the spin $\sigma_i^n$. We recall that $\Delta\rho_3^n \equiv \Delta N_f^n$ in the CIM notation, and we changed it to the dimer notation in eq. (B.5) only for uniformity's sake. The probability of adding a new spin to the cluster is now

  $q(\Delta\rho_3^{n'}, \Delta\rho_0^{n'}) := \big(1 - e^{-2K_\tau}\big) \cdot \min\big(1,\, e^{-K_z (V_3\Delta\rho_3^{n'} + V_0\Delta\rho_0^{n'})}\big)$.

The proposal probability of a cluster between time slices $m$ and $m'$ can simply be rewritten with the new probability $q(\Delta\rho_3^k, \Delta\rho_0^k)$,

\[
P(\sigma\to\sigma') = \frac{1}{N_f} \prod_{\substack{m<k<m' \\ k\neq n}} q(\Delta\rho_3^k, \Delta\rho_0^k) \times \prod_{k=m,m'} \big[1 - q(\Delta\rho_3^k, \Delta\rho_0^k)\big]^{f_i^k \delta(\sigma_i^k,\, \sigma_0)},
\]

and we have

\[
\frac{P(\sigma'\to\sigma)}{P(\sigma\to\sigma')} = \frac{N_f}{N_f + \Delta N_f}\, e^{\Delta E_h - K_z(V_3\Delta\rho_3^n + V_0\Delta\rho_0^n)} \times \prod_{k=m,m'} \big[1 - q(\Delta\rho_3^k, \Delta\rho_0^k)\big]^{-f_i^k \sigma_i^k \sigma_0},
\]

where $\Delta E_h$ is now equal to $\Delta E_h = K_z \sum_{k=m+1}^{m'-1} (V_3\Delta\rho_3^k + V_0\Delta\rho_0^k) = K_z(V_3\Delta\rho_3 + V_0\Delta\rho_0)$. Finally, the new acceptance rate is

(B.6)  $A(\sigma\to\sigma') = \min\left(1,\; \frac{N_f}{N_f + \Delta N_f}\, e^{-K_z(V_3\Delta\rho_3^n + V_0\Delta\rho_0^n)}\, e^{-2K_\tau \sigma_0(\sigma_i^m + \sigma_i^{m'})} \prod_{k=m,m'} \big[1 - q(\Delta\rho_3^k, \Delta\rho_0^k)\big]^{-f_i^k \sigma_i^k \sigma_0}\right)$.

Appendix C

Energy and gap evaluation

Energy: The quantum Hamiltonian $\hat H \equiv \hat H_\mathrm{QIM}$, eq. (1.2), is not diagonal in the $\{\hat\sigma_i^z\}$-eigenbasis and its expectation value can hence not be evaluated directly along the lines of eq. (1.8). Based on the relation eq. (1.7) between the quantum and classical partition functions, an efficient way to evaluate the energy is to use that

\[
\langle\hat H\rangle_\mathrm{QIM} = \frac{1}{Z_\mathrm{QIM}} \mathrm{Tr}\big(\hat H e^{-\beta\hat H}\big) = -\frac{1}{Z_\mathrm{QIM}} \partial_\beta Z_\mathrm{QIM}
= -\frac{1}{N}\left(\frac{\partial_{\Delta\beta} Z_\mathrm{CIM}}{Z_\mathrm{CIM}} + \frac{\partial_{\Delta\beta} \mathcal{A}}{\mathcal{A}}\right) + O(\Delta\beta^2).
\]

Using the relations eq. (1.9) between the parameters of the quantum dimer model and the classical Ising model, as well as $\mathcal{A} = A^{LN} = [\sinh(2\Delta\beta t)/2]^{LN/2}$ (eq. (1.7b)), one obtains

\begin{align}
\langle\hat H\rangle_\mathrm{QIM} &= \frac{1}{N} \langle \partial_{\Delta\beta} E_\mathrm{CIM}(\sigma) \rangle_\mathrm{CIM} - \frac{L}{A}\,\partial_{\Delta\beta} A + O(\Delta\beta^2) \\
&= \frac{1}{N} \sum_n \Big\langle H_z(\sigma^n) + \sum_i \frac{t\, \sigma_i^n \sigma_i^{n+1}}{\sinh(2\Delta\beta t)} \Big\rangle_\mathrm{CIM} - L t \coth(2\Delta\beta t) + O(\Delta\beta^2).
\end{align}

So what one basically needs to evaluate are averages of the number of flippable spins ($H_z(\sigma^n)$) and the nearest-neighbor correlators $\sigma_i^n \sigma_i^{n+1}$ in the imaginary-time direction.

Gap: It is possible to estimate the energy gap to excited states by evaluating imaginary-time correlation functions

\[
\langle \hat A(0) \hat A^\dagger(i\tau) \rangle = \frac{1}{Z} \mathrm{Tr}\big(\hat A e^{-\tau\hat H} \hat A^\dagger e^{-(\beta-\tau)\hat H}\big).
\]

If $\tau$ and $\beta-\tau$ are both big enough in comparison to the inverse gap to the second excited state, one can expect the correlation functions to have a cosh form.
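This cosh form is what one fits in practice. A minimal sketch of such a fit with scipy (synthetic data stand in for the measured correlator; the value of beta, the fit window, and the mock parameters are placeholders):

import numpy as np
from scipy.optimize import curve_fit

beta = 19.2
tau = np.linspace(1.0, beta - 1.0, 60)          # window away from tau = 0, beta
gap_true, a_true, b_true = 0.35, 0.02, 0.5      # parameters of the mock data
data = a_true + b_true * np.cosh((beta / 2 - tau) * gap_true)
data += np.random.default_rng(0).normal(0.0, 1e-4, tau.size)   # mock noise

def model(tau, a, b, gap):
    # leading term of the spectral sum: a + b*cosh((beta/2 - tau) * dE_{1,0})
    return a + b * np.cosh((beta / 2 - tau) * gap)

(a, b, gap), _ = curve_fit(model, tau, data, p0=(0.0, 1.0, 0.1))
print(gap)   # recovers ~0.35, the gap (upper bound) encoded in the data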
For a generic operator $\hat A = \sum_{ij} a_{ij} |i\rangle\langle j|$, with the eigenstates $|i\rangle$ ($i \in \mathbb{N}_0$) of the system ordered according to increasing energies $E_i$, and gaps denoted by $\Delta E_{j,i} := E_j - E_i$, one gets

\begin{align}
\langle \hat A(0) \hat A^\dagger(i\tau) \rangle
&= \frac{1}{2Z} \sum_{ij} |a_{ij}|^2 \big(e^{-\tau E_j} e^{-(\beta-\tau)E_i} + e^{-\tau E_i} e^{-(\beta-\tau)E_j}\big) \\
&= \frac{1}{2Z} \sum_{ij} |a_{ij}|^2 e^{-\beta E_i} \big(e^{-\tau\Delta E_{j,i}} + e^{-(\beta-\tau)\Delta E_{j,i}}\big) \\
&= \frac{1}{Z} \sum_{ij} |a_{ij}|^2 e^{-\beta(E_j+E_i)/2} \cosh\big((\beta/2-\tau)\,\Delta E_{j,i}\big),
\end{align}

i.e., a sum of cosh terms with non-negative coefficients that decay exponentially in $\beta$ and $E_j + E_i$ (due to the normalization factor $1/Z$, rather in $E_j + E_i - 2E_0 = \Delta E_{j,0} + \Delta E_{i,0}$). The "saturation" value $\langle \hat A(0)\hat A^\dagger(\beta/2)\rangle = \frac{1}{Z}\sum_{ij} |a_{ij}|^2 e^{-\beta(E_j+E_i)/2}$ of the correlator ($\tau = \beta/2$) has, for low temperatures $\beta\Delta E_{1,0} \gg 1$, the value $|\langle\hat A\rangle_\mathrm{gs}|^2$. As exemplified in fig. C.1, one can hence extract the gap of the system by fitting a few leading terms of the sum to the imaginary-time correlation functions, the simplest expression being $a + b\cosh((\beta/2-\tau)\,\Delta E_{1,0})$. To this purpose we chose the correlator $\langle\hat\sigma_i^z(0)\hat\sigma_i^z(i\tau)\rangle_\mathrm{QIM} = \langle \sigma_i^n \sigma_i^{n+\tau/\Delta\beta} \rangle_\mathrm{CIM}$.

[Figure C.1: Determination of upper bounds on the energy gap by exponential fits of the correlator $\langle\hat\sigma_i^z(0)\hat\sigma_i^z(i\tau)\rangle_\mathrm{QIM}$ for a system of $L = 36\times 36$ sites, $\beta = 19.2$, and $\Delta\beta = 0.02$.]

Appendix D

Dimer sum rules

Dimer coverings on regular lattices are constrained by simple sum rules, associated to Euler-Poincaré and Gauss-Bonnet relations for tilings on compact surfaces. For a given dimer covering, we call $N_d$ the total number of dimers, $n_j$ the number of plaquettes covered with $j$ dimers, and $F$ and $V$ the total number of plaquettes (faces) and vertices. Clearly, the hard-core dimer covering condition implies that $V = 2N_d$. Since each dimer is along an edge shared by two faces, we get

(D.1)  $\sum_{j=0}^{j_\mathrm{max}} j\, n_j = 2N_d = V$

(D.2)  $\sum_{j=0}^{j_\mathrm{max}} n_j = F$

These two relations are valid for dimer coverings on any tiling, ordered or not. For a regular tiling on a compact surface (sphere or any n-fold torus), whose types of plaquette and/or coordination numbers follow simple rules, it is in addition possible to relate $V$ and $F$, and to derive simple sum rules for the $n_j$. On a torus, one has $V - E + F = 0$, with $E$ the number of edges. If the tiling vertices have a constant coordination number $c$, we find $F = V(c/2 - 1)$. For the hexagonal lattice, we have $c = 3$, $j_\mathrm{max} = 3$, and $V = 2F$, leading to $n_3 = n_1 + 2n_0$. Notice that hexagonal 2-plaquettes do not enter the relation, and are called "free charges", and that, on average, plaquettes carry two dimers. For the square lattice, $c = 4$, $j_\mathrm{max} = 2$, and $V = F$, leading to the announced $n_2 = n_0$, the 1-plaquettes being the free charges in that case. Notice finally that tilings with fixed boundaries can also be analyzed along the same lines, but at the price of additional boundary terms.

Appendix E

Perturbative star and plaquette phases

The ideal star state is a product state with 3-plaquettes on two of the three triangular sublattices (say A and B) and 0-plaquettes on C. Denoting by $|3\rangle_i$ and $|3'\rangle_i$ the two three-dimer configurations of hexagon $i$,

(E.1)  $|\psi_\mathrm{star}\rangle_\mathrm{dimer} = \bigotimes_{i\in A} |3\rangle_i \bigotimes_{j\in B} |3\rangle_j$.

The corresponding state in the Ising-spin representation is

(E.2)  $|\psi_\mathrm{star}\rangle_\mathrm{spin} = \bigotimes_{i\in A} |{\uparrow}\rangle_i \bigotimes_{j\in B} |{\uparrow}\rangle_j \bigotimes_{k\in C} |{\downarrow}\rangle_k$,

keeping in mind that the state with all spins flipped corresponds to the same dimer state. $|\psi_\mathrm{star}\rangle$ is the ground state for $V/t \to -\infty$, where the potential energy term selects the classical dimer coverings with the maximum number of 3-plaquettes.
For a perturbative analysis in $\lambda := t/V$ we write the Hamiltonian eq. (1.1) in the form

(E.3)  $\hat H_\mathrm{QDM} = V\Big(-\lambda \sum_i \hat f_i + \hat N_3\Big)$,

where $\hat f_i = (|3\rangle_i\langle 3'|_i + \mathrm{h.c.})$ flips plaquette $i$, and $\hat N_3 = \sum_i \big(|3\rangle_i\langle 3|_i + |3'\rangle_i\langle 3'|_i\big)$ counts the total number of flippable plaquettes. We denote the energy of the $i$th unperturbed eigenstate by $E_i^{(0)}$ and $|\psi_0\rangle := |\psi_\mathrm{star}\rangle$. For $\lambda = 0$, the first excited states $|\psi_{1,i}\rangle := \hat f_i |\psi_0\rangle$ are obtained by flipping single plaquettes. The other degenerate $|\psi_0\rangle$ can be disregarded in the following, as they can only be reached by an extensive number of flips. Up to second order, the perturbed energy is

(E.4)  $\frac{E^{(2)}_\mathrm{star}}{V} = \frac{E_0^{(0)}}{V} + \lambda^2 \sum_i \frac{|\langle\psi_{1,i}|\hat f_i|\psi_0\rangle|^2}{E_0^{(0)}/V - E_1^{(0)}/V} + O(\lambda^3)$,

since the linear term $\langle\psi_0|\hat f_i|\psi_0\rangle$ is zero. Applying eq. (E.3), we find

(E.5)  $E^{(2)}_\mathrm{star} = \frac{2L}{3}\, V + \lambda^2 V\, \frac{2L}{3}\, \frac{1}{\frac{2L}{3} - \big(\frac{2L}{3} - 3\big)} \overset{(\lambda = t/V)}{=} L\left(\frac{2V}{3} + \frac{2t^2}{9V}\right)$.

Let us now find an upper bound on the ground-state energy in the plaquette phase. The ideal plaquette state is a simple tensor product state with resonating plaquettes on one of the three sublattices:

(E.6)  $|\psi_\mathrm{plaq}\rangle_\mathrm{dimer} = \bigotimes_{i\in A} \big(|3\rangle_i + |3'\rangle_i\big)/\sqrt{2}$.

The corresponding state in the Ising-spin representation is

(E.7)  $|\psi_\mathrm{plaq}\rangle_\mathrm{spin} = \bigotimes_{i\in A} |{\rightarrow}\rangle_i \bigotimes_{j\in B} |{\uparrow}\rangle_j \bigotimes_{k\in C} |{\downarrow}\rangle_k$,

where $|{\rightarrow}\rangle_i$ denotes the $\hat\sigma_i^x$-eigenstate $(|{\uparrow}\rangle_i + |{\downarrow}\rangle_i)/\sqrt{2}$. Recall that $|\psi_\mathrm{plaq}\rangle$ is not an exact ground state for any value of $V/t$. Its energy expectation value hence yields an upper bound on the ground-state energy. The contribution of the kinetic terms is due to the resonating 3-plaquettes (density 1/3) and has the value $-tL/3$. The contribution of the potential terms is due to the $L/3$ flippable plaquettes of sublattice A, while sublattices B and C contribute with a 3-plaquette density of 1/8 each. This leads us to

(E.8)  $E_\mathrm{plaq} = -\frac{L}{3}\, t + \left(\frac{L}{3} + \frac{2L}{3}\cdot\frac{1}{8}\right) V = L\left(-\frac{t}{3} + \frac{5V}{12}\right)$.

For $V = 0$ and $t = 1$, this gives an energy per plaquette equal to $-1/3$, slightly above the one found numerically ($\sim -0.38$). Improving this variational energy is possible in several ways. The simplest one consists in allowing for some flips in one of the two other sublattices (say B), in a way that keeps the possibility of expressing the state as a tensor product over separated small regions. Decompose the $L/3$ sites in A into $L/9$ disjoint equilateral triangles, each containing one B site, and write down the locally excited state where the three A sites of a given triangle get frozen, while the B site gets flipped. One easily gets the following energy

(E.9)  $E^{(1)}_\mathrm{plaq} = \frac{L}{72}\left(-12t + 21V - \sqrt{152t^2 - 216tV + 81V^2}\right)$,

where the superscript (1) reminds us that it is built by considering (and tensoring) local clusters containing one additional flip excitation on the B sublattice. This new variational state indeed presents a lower energy (one finds an energy per plaquette $E \sim -0.3379$ at $V = 0$), but is maybe too naïve, with only a quite small improvement, and the additional weak point that it breaks the overall state symmetry down to a smaller subgroup, compared to that believed to be shared by the real and ideal plaquette states. One would clearly need better variational states for the plaquette phase, which would show better agreement with the numerically obtained energy, but also with the varying dimer observables throughout the plaquette phase.
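The two variational energies per plaquette quoted above are easy to check numerically at $V = 0$, $t = 1$. A quick sketch:

import math

t, V, L = 1.0, 0.0, 1.0   # setting L = 1 gives energies per plaquette

E_plaq = L * (-t / 3 + 5 * V / 12)                                     # eq. (E.8)
E_plaq1 = L / 72 * (-12 * t + 21 * V
                    - math.sqrt(152 * t**2 - 216 * t * V + 81 * V**2))  # eq. (E.9)

print(E_plaq)    # -0.3333..., the -1/3 quoted above
print(E_plaq1)   # -0.3379..., the slightly improved variational bound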
Appendix F

Perturbation analysis of the V0-V3 model near the RK point

Classical transfer matrix and free fermions: To treat the classical dimer model on the hexagonal lattice using a transfer matrix, it is convenient to consider the "brick wall" version of the lattice, as in fig. F.1a. First, note that it is enough to consider the dimer occupations of the vertical bonds -- the information on all the other bonds can be obtained using the hard-core constraints. The transfer matrix $T$ then relates a dimer configuration $|\psi\rangle$ on one row $y$ to the configuration on the row $y+1$ above. More precisely, $T|\psi\rangle$ is the linear superposition of all the configurations of $y+1$ which are compatible with $|\psi\rangle$ at level $y$. The next step is to consider a single row of vertical bonds, and to associate a (spinless) fermion Fock space to it: the bonds not occupied by a dimer carry the fermions, and the bonds with a dimer carry holes. The $y$ component of the flux density is simply related to the density of vertical dimers, which is, in turn, simply related to the fermion density $n$ (which is the same for each row):

(F.1)  $f = 2 - 3n$.

The idea is simply to use the Pauli principle to enforce the dimer hard-core constraint. If we denote by $c_x^\dagger$ the fermion creation operator on site $x$ (see the numbering in fig. F.1a), the transfer matrix can be shown to obey:

(F.2)  $T c_x^\dagger = \big(c_x^\dagger + c_{x+1}^\dagger\big)\, T$
(F.3)  $T\, |\mathrm{vacuum}\rangle = |\mathrm{vacuum}\rangle$

In other words, a fermion on site $x$ should propagate to $x$ or $x+1$ in the line above. The simplest solution to these relations has the following expression:

(F.4)  $T = \prod_{k\in[-\pi,\pi[} \big(1 + e^{ik}\, c_k^\dagger c_k\big)$,

where $c_k^\dagger$ is the Fourier transform of $c_x^\dagger$. From this one can also find the commutation relation with the annihilation operators:

(F.5)  $c_x\, T = T\, (c_x + c_{x-1})$.

[Figure F.1: Brick-wall representation of the hexagonal lattice. The shaded rectangle corresponds to a 3-plaquette in (a), and to a 0-plaquette in (b). The fermions (blue circles) live on the vertical bonds which are not occupied by a dimer (magenta). The transfer matrix propagates the configurations in the $y$ direction, from one row to the line above. Note the numbering of the "sites" (vertical bonds): a fermion on site $x$ may go to $x$ or $x+1$ in the line above. To enforce the presence of three dimers around the shaded plaquette we need: i) row $y=0$: one fermion on bond 1; ii) row $y=1$: one fermion on bond 1 and one hole on bond 2 (or vice versa); iii) row $y=2$: one fermion on bond 2. To enforce zero dimers around it, we need: i) row $y=0$: a hole on bond 1; ii) row $y=1$: two fermions on bonds 1 and 2; and iii) row $y=2$: a hole on bond 2.]

In the following it will also be necessary to commute $c_x$ (and $c_x^\dagger$) and $T$ in the reversed direction compared to eqs. (F.2) and (F.5). The results are now infinite sums:

(F.6)  $c_x^\dagger\, T = T\, \big(c_x^\dagger - c_{x+1}^\dagger + c_{x+2}^\dagger - c_{x+3}^\dagger + \cdots\big)$
(F.7)  $T\, c_x = \big(c_x - c_{x-1} + c_{x-2} - c_{x-3} + \cdots\big)\, T$.

When the $y$ dimension of the lattice goes to infinity, only the eigenvector of $T$ with the largest eigenvalue in the given flux sector needs to be kept. The latter is nothing but a Fermi sea $|f\rangle$ with Fermi momentum $k_F$ and density $n = k_F/\pi$. The corresponding eigenvalue, $\Lambda(f) = \prod_{-k_F < k \leq k_F} (1 + e^{ik})$, allows one to compute the entropy per site, but its explicit expression is not needed here.
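The determinant entries computed in the next section reduce to the Fermi-sea two-point function $G_{x-y} = \sin(n\pi(x-y))/(\pi(x-y))$ (with $G_0 = n$) together with the alternating sum rule $\sum_{r\geq 0} (-1)^r G_r = n/2$. A quick numerical check of that sum rule ($n$ is an arbitrary test density):

import numpy as np

n = 0.37                                    # test density, 0 < n < 1
r = np.arange(1, 2_000_001)
G = np.sin(n * np.pi * r) / (np.pi * r)     # G_r for r >= 1; G_0 = n
total = n + np.sum((-1.0) ** r * G)         # partial sum of the series
print(total, n / 2)                         # both ~0.185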
Density of 3-plaquettes: We start with the computation of $\rho_3(f)$, the density of 3-plaquettes. Such a hexagon is shaded in fig. F.1a, and it is characterized by one fermion at $x=1$ on the lowest row ($\to c_1^\dagger c_1$), one fermion at $x=1$ and one hole at $x=2$ in the second row ($\to c_1^\dagger c_1 c_2 c_2^\dagger$), and, finally, a fermion at $x=2$ in the third row ($\to c_2^\dagger c_2$). The density $\rho_3$ is thus

(F.8)  $\rho_3 = \frac{2\,\langle f|\, c_2^\dagger c_2\, T\, c_1^\dagger c_1 c_2 c_2^\dagger\, T\, c_1^\dagger c_1\, |f\rangle}{\langle f|\, T^2\, |f\rangle}$

(the factor 2 comes from the fact that there are two ways to put three dimers around a hexagon). The next step amounts to eliminating $T$ by using the relations eqs. (F.2), (F.5), (F.6) and (F.7). The result is

(F.9)  $\rho_3 = 2\,\langle f|\, D_2^\dagger (c_2 + c_1)\, c_1 c_1^\dagger c_2^\dagger c_2\, (c_1^\dagger + c_2^\dagger)\, S_1\, |f\rangle$,

where we have defined:

(F.10)  $D_r^\dagger = \sum_{x=0}^{\infty} (-1)^x c_{x+r}^\dagger$
(F.11)  $S_r = \sum_{x=-\infty}^{0} (-1)^x c_{x+r}$.

The correlator of eq. (F.9) can be obtained, using Wick's theorem, as the determinant of a $4\times 4$ matrix $M_3$:

(F.12)  $M_3 = \begin{pmatrix}
\langle D_2^\dagger (c_2+c_1)\rangle & \langle D_2^\dagger c_1\rangle & \langle D_2^\dagger c_2\rangle & \langle D_2^\dagger S_1\rangle \\
-\langle (c_2+c_1)\, c_1^\dagger\rangle & -\langle c_1 c_1^\dagger\rangle & \langle c_1^\dagger c_2\rangle & \langle S_1 c_1^\dagger\rangle \\
-\langle (c_2+c_1)\, c_2^\dagger\rangle & -\langle c_1 c_2^\dagger\rangle & \langle c_2^\dagger c_2\rangle & \langle c_2^\dagger S_1\rangle \\
-\langle (c_2+c_1)(c_1^\dagger+c_2^\dagger)\rangle & -\langle c_1 (c_1^\dagger+c_2^\dagger)\rangle & -\langle c_2 (c_1^\dagger+c_2^\dagger)\rangle & \langle (c_1^\dagger+c_2^\dagger)\, S_1\rangle
\end{pmatrix}$.

The two-point functions appearing above can be expressed using the correlator of the Fermi sea: $G_{x-y} = \langle c_x^\dagger c_y\rangle = \frac{\sin(n\pi(x-y))}{\pi(x-y)}$ for $x \neq y$, and $\langle c_x^\dagger c_x\rangle = n$. The correlations $\langle D_x^\dagger c_y\rangle$ or $\langle c_x S_y\rangle$ contain some infinite sums which can be calculated using the sum rule $\sum_{r=0}^\infty (-1)^r G_r = n/2$. The ones appearing in $M_3$ are $\langle D_2^\dagger c_1\rangle = \langle D_2^\dagger c_2\rangle = \langle c_2^\dagger S_1\rangle = \langle c_1^\dagger S_1\rangle = n/2$. The last one, $\langle D_2^\dagger S_1\rangle$, contains two sums which can also be performed exactly, leading to $\langle D_2^\dagger S_1\rangle = \frac{\sin(n\pi)}{2\pi(1+\cos(n\pi))}$. The matrix $M_3$ therefore takes the explicit form:

(F.13)  $M_3 = \begin{pmatrix}
n & n/2 & n/2 & \frac{\sin(n\pi)}{2\pi(1+\cos(n\pi))} \\
A & n-1 & G_1 & n/2 \\
A & G_1 & n & n/2 \\
2A & A & A & n
\end{pmatrix}$,

where $G_1 = \frac{\sin(\pi n)}{\pi}$ and we have set $A = G_1 - 1 + n$. $\rho_3$ is finally obtained from the determinant of $M_3$:

(F.14)  $\rho_3 = 2\det(M_3) = \frac{(2+\cos(n\pi))(n^2-2n+1)\sin(n\pi)}{\pi(\cos(n\pi)+1)} - n^2(n-1) + \frac{\sin(n\pi)(\cos(n\pi)-1)}{\pi^3}$.

Density of 0-plaquettes: The density of 0-plaquettes can be obtained in a similar way. The starting point is the following correlator (see fig. F.1b):

(F.15)  $\rho_0 = \frac{\langle f|\, c_2 c_2^\dagger\, T\, c_1^\dagger c_1 c_2^\dagger c_2\, T\, c_1 c_1^\dagger\, |f\rangle}{\langle f|\, T^2\, |f\rangle}$.

After commuting one $T$ to the right and the other to the left, we get:

(F.16)  $\rho_0 = 2\,\langle f|\, (c_2+c_1)\, D_2^\dagger\, c_1^\dagger c_1 c_2^\dagger c_2\, S_1\, (c_1^\dagger + c_2^\dagger)\, |f\rangle$.

As for $\rho_3$, we construct a matrix from the two-point contractions, and the result is:

(F.17)  $M_0 = \begin{pmatrix}
n-1 & n/2 & n/2 & \frac{\sin(n\pi)}{2\pi(\cos(n\pi)+1)} \\
A & n & G_1 & n/2 \\
A & G_1 & n & n/2 \\
2A & A & A & n-1
\end{pmatrix}$.

Finally, $\rho_0$ is obtained by taking the determinant:

(F.18)  $\rho_0 = \det(M_0) = \frac{\cos(n\pi)(\cos(n\pi)+1)-2}{\pi^2} - \frac{n\sin(n\pi)\,\big[\cos(n\pi)(n-2)+2n-3\big]}{\pi(\cos(n\pi)+1)} - \frac{\sin(n\pi)(\cos(n\pi)-1)}{\pi^3} + n^2(n-1)$.

Density of 1-plaquettes and 2-plaquettes: In a similar fashion, we can obtain the formulas for $\rho_1$ and $\rho_2$:

(F.19)  $\rho_1(n) = \frac{3 - 2\cos(n\pi) - \cos(2n\pi)}{\pi^2} + \frac{3\sin(2n\pi) - 6\sin(n\pi)}{2\pi^3} + \frac{n(3n-4)\sin(n\pi)}{\pi} + \frac{\sin(n\pi)(n-1)(3n-1)}{\pi(\cos(n\pi)+1)} - 3n^2(n-1)$

(F.20)  $\rho_2(n) = \frac{2\cos(n\pi) + \cos(2n\pi) - 3}{2\pi^2} + \frac{6\sin(n\pi) - 3\sin(2n\pi)}{2\pi^3} + \frac{\sin(n\pi)(3n-2)\big(2n + n - 1 + \cos(n\pi)\big)}{\pi(\cos(n\pi)+1)}$

To be sure of their correctness, we tested whether the sum rules $\sum_{j=0}^3 \langle\hat\rho_j\rangle = 1$ and $\langle\hat\rho_3\rangle - \langle\hat\rho_1\rangle - 2\langle\hat\rho_0\rangle = 0$ are obeyed by the perturbative formulas.

Bibliography

P. W. Anderson. Resonating valence bonds: A new kind of insulator? Mater. Res. Bull., 8(2):153-160, 1973.
Daniel S. Rokhsar and S. A. Kivelson. Superconductivity and the quantum hard-core dimer gas. Phys. Rev. Lett., 61(20):2376-2379, November 1988.
S. Sachdev and M. Vojta.
Translational symmetry breaking in two-dimensional antiferromagnets and superconductors. J. Phys. Soc. Jpn., 69:Suppl. B 1, 2000.
R. Moessner and S. L. Sondhi. Resonating valence bond phase in the triangular lattice quantum dimer model. Phys. Rev. Lett., 86:1881-1884, Feb 2001.
R. Moessner and S. L. Sondhi. Ising models of quantum frustration. Phys. Rev. B, 63(22):224401, May 2001.
G. Misguich, D. Serban, and V. Pasquier. Quantum dimer model on the kagome lattice: Solvable dimer-liquid and Ising gauge theory. Phys. Rev. Lett., 89:137202, Sep 2002.
Roderich Moessner and Kumar S. Raman. Quantum dimer models. In Claudine Lacroix, Philippe Mendels, and Frédéric Mila, editors, Introduction to Frustrated Magnetism, volume 164 of Springer Series in Solid-State Sciences, pages 437-479. Springer, Heidelberg, 2011.
A. F. Albuquerque, D. Schwandt, B. Hetényi, S. Capponi, M. Mambrini, and A. M. Läuchli. Phase diagram of a frustrated quantum antiferromagnet on the honeycomb lattice: Magnetic order versus valence-bond crystal formation. Phys. Rev. B, 84:024406, Jul 2011.
Arnab Sen, Kedar Damle, and T. Senthil. Superfluid insulator transitions of hard-core bosons on the checkerboard lattice. Phys. Rev. B, 76:235107, Dec 2007.
R. Moessner and S. L. Sondhi. Resonating valence bond phase in the triangular lattice quantum dimer model. Phys. Rev. Lett., 86(9):1881-1884, February 2001.
Arnaud Ralko, Michel Ferrero, Federico Becca, Dmitri Ivanov, and Frédéric Mila. Zero-temperature properties of the quantum dimer model on the triangular lattice. Phys. Rev. B, 71(22):224109, June 2005.
Grégoire Misguich and Frédéric Mila. Quantum dimer model on the triangular lattice: Semiclassical and variational approaches to vison dispersion and condensation. Phys. Rev. B, 77(13):134421, April 2008.
R. Moessner, S. L. Sondhi, and P. Chandra. Phase diagram of the hexagonal lattice quantum dimer model. Phys. Rev. B, 64(14):144416, 2001.
Masuo Suzuki, Seiji Miyashita, and Akira Kuroda. Monte Carlo simulation of quantum spin systems. I. Prog. Theor. Phys., 58(5):1377-1387, 1977.
T. M. Schlittler, R. Mosseri, and T. Barthel. Phase diagram of the hexagonal lattice quantum dimer model: order parameters, ground-state energy, and gaps. arXiv:1501.02242, January 2015.
Eduardo Fradkin, David A. Huse, R. Moessner, V. Oganesyan, and S. L. Sondhi. Bipartite Rokhsar-Kivelson points and Cantor deconfinement. Phys. Rev. B, 69(22):224415, June 2004.
P. A. MacMahon. Combinatory Analysis. Cambridge University Press, 1916.
David M. Bressoud. Proofs and Confirmations: The Story of the Alternating Sign Matrix Conjecture. Cambridge University Press, Cambridge; New York, 1999.
V. Elser. Solution of the dimer problem on a hexagonal lattice with boundary. J. Phys. A: Math. Gen., 17(7):1509-1513, May 1984.
R. Mosseri, F. Bailly, and C. Sire. Configurational entropy in random tiling models. Journal of Non-Crystalline Solids, 153-154:201-204, 1993.
N. Destainville, R. Mosseri, and F. Bailly. Configurational entropy of codimension-one tilings and directed membranes. Journal of Statistical Physics, 87(3-4):697-754, 1997.
Jason Olejarz, P. L. Krapivsky, S. Redner, and K. Mallick. Growth Inside a Corner: The Limiting Interface Shape. Phys. Rev. Lett., 108(1):016102, January 2012.
Raphaël Cerf and Richard Kenyon. The Low-Temperature Expansion of the Wulff Crystal in the 3d Ising Model. Commun. Math. Phys., 222(1):147-179, August 2001.
H. Cohn, M. Larsen, and J. Propp. The Shape of a Typical Boxed Plane Partition.
New York J. Math., 4:137-165, 1998.
N. Destainville. Entropy and boundary conditions in random rhombus tilings. J. Phys. A: Math. Gen., 31(29):6123, July 1998.
R. Moessner, S. L. Sondhi, and P. Chandra. Two-dimensional periodic frustrated Ising models in a transverse field. Phys. Rev. Lett., 84(19):4457-4460, May 2000.
H. W. J. Blöte and H. J. Hilhorst. Roughening transitions and the zero-temperature triangular Ising antiferromagnet. J. Phys. A: Math. Gen., 15(11):L631, November 1982.
Peter Orland. Exact solution of a quantum model of resonating valence bonds on the hexagonal lattice. Phys. Rev. B, 47(17):11280-11290, May 1993.
David J. Thouless. Topological Quantum Numbers in Nonrelativistic Physics. World Scientific, March 1998.
H. F. Trotter. On the product of semi-groups of operators. Proc. Am. Math. Soc., 10:545-551, 1959.
Masuo Suzuki. Relationship between d-dimensional quantal spin systems and (d+1)-dimensional Ising systems. Prog. Theor. Phys., 56(5):1454-1469, 1976.
Marc Daniel Schulz, Sébastien Dusuel, Grégoire Misguich, Kai Phillip Schmidt, and Julien Vidal. Ising anyons with a string tension. Phys. Rev. B, 89:201103, May 2014.
Peter Orland. Exact solution of a quantum gauge magnet in 2+1 dimensions. Nucl. Phys. B, 372(3):635-653, 1992.
Peter Orland. Rokhsar-Kivelson model of quantum dimers as a gas of free fermionic strings. Phys. Rev. B, 49:3423-3431, Feb 1994.
Eduardo Fradkin, David A. Huse, R. Moessner, V. Oganesyan, and S. L. Sondhi. Bipartite Rokhsar-Kivelson points and Cantor deconfinement. Phys. Rev. B, 69:224415, Jun 2004.
Sylvain Capponi, David Schwandt, Sergei Isakov, Roderich Moessner, and Andreas Läuchli. Emergent U(1) Symmetry in Square Lattice Quantum Dimer Models. In APS Meeting Abstracts, page 26006, February 2012.
D. Banerjee, M. Bögli, C. P. Hofmann, F.-J. Jiang, P. Widmer, and U.-J. Wiese. Interfaces, strings, and a soft mode in the square lattice quantum dimer model. arXiv:1406.2077, June 2014.
David Schwandt. Valence bond approach to the low-energy physics of antiferromagnets. PhD thesis, Université Paul Sabatier - Toulouse III, July 2011.
G. Misguich. Private communications.
C. L. Henley. From classical to quantum dynamics at Rokhsar-Kivelson points. J. Phys.: Condens. Matter, 16(11):S891-S898, March 2004.
R. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput., 16(5):1190-1208, 1995.
T. M. Schlittler, T. Barthel, G. Misguich, J. Vidal, and R. Mosseri. Devil's staircase in a quantum dimer model on the hexagonal lattice. (In finalization).
M. D. Schulz, S. Dusuel, G. Misguich, K. P. Schmidt, and J. Vidal. Ising anyons with a string tension. Physical Review B, 89(20), May 2014. arXiv:1401.1033 [cond-mat, physics:hep-lat, physics:hep-th, physics:quant-ph].
Ashvin Vishwanath, L. Balents, and T. Senthil. Quantum criticality and deconfinement in phase transitions between valence bond solids. Phys. Rev. B, 69(22):224416, June 2004.
Stefanos Papanikolaou, Kumar S. Raman, and Eduardo Fradkin. Devil's staircases, quantum dimer models, and stripe formation in strong coupling models of quantum frustration. Phys. Rev. B, 75(9):094406, March 2007.
Adrian Feiguin, Simon Trebst, Andreas W. W. Ludwig, Matthias Troyer, Alexei Kitaev, Zhenghan Wang, and Michael H. Freedman. Interacting Anyons in Topological Quantum Liquids: The Golden Chain. Phys. Rev. Lett., 98(16):160409, April 2007.
Simon Trebst, Matthias Troyer, Zhenghan Wang, and Andreas W. W. Ludwig.
A Short Introduction to Fibonacci Anyon Models. Prog. Theor. Phys. Supplement, 176:384-407, June 2008.
R. Youngblood, J. Axe, and B. McCoy. Correlations in ice-rule ferroelectrics. Phys. Rev. B, 21(11):5212-5220, 1980.
B. Nienhuis, H. J. Hilhorst, and H. W. J. Blöte. Triangular SOS models and cubic-crystal shapes. J. Phys. A: Math. Gen., 17(18):3559, December 1984.
L. S. Levitov. Equivalence of the dimer resonating-valence-bond problem to the quantum roughening problem. Phys. Rev. Lett., 64(1):92-94, January 1990.
Christopher L. Henley. Relaxation time for a dimer covering with height representation. J. Stat. Phys., 89(3-4):483-507, November 1997.
Jean-Marie Stéphan, Shunsuke Furukawa, Grégoire Misguich, and Vincent Pasquier. Shannon and entanglement entropies of one- and two-dimensional critical wave functions. Phys. Rev. B, 80(18):184421, November 2009.
M. Peyrard and S. Aubry. Critical behaviour at the transition by breaking of analyticity in the discrete Frenkel-Kontorova model. J. Phys. C: Solid State Phys., 16(9):1593, March 1983.
N. Destainville. Entropie configurationnelle des pavages aléatoires et des membranes dirigées. PhD thesis, Université de Paris 6, 1997.
R. Dijkgraaf, D. Orlando, and S. Reffert. Quantum crystals and spin chains. Nuclear Physics B, 811(3):463-490, April 2009.
James Propp. Boundary-Dependent Local Behavior for 2-D Dimer Models. Int. J. Mod. Phys. B, 11(01n02):183-187, January 1997.
Richard Kenyon. Lectures on Dimers. In Statistical Mechanics, volume 16 of IAS/Park City Mathematics Series, pages 191-230. Amer. Math. Soc., 2009.
Leticia F. Cugliandolo, Giuseppe Gonnella, and Alessandro Pelizzola. Six vertex model with domain-wall boundary conditions in the Bethe-Peierls approximation. arXiv:1501.00883 [cond-mat, physics:math-ph], December 2014.
H. Bethe. Zur Theorie der Metalle. Z. Physik, 71(3-4):205-226, March 1931.
E. Ehrhart. Nombre de points entiers d'un tétraèdre. Comptes rendus de l'Académie des Sciences Paris, 258(I):3945-3948, 1964.
T. M. Schlittler, P. Ribeiro, and R. Mosseri. Two dimensional partitions: from an amoebae to an arctic circle. (In preparation).
Joakim Linde, Cristopher Moore, and Mats G. Nordahl. An n-Dimensional Generalization of the Rhombus Tiling. DMTCS Proceedings, 0(1), January 2006.
M. Widom, R. Mosseri, N. Destainville, and F. Bailly. Arctic Octahedron in Three-Dimensional Rhombus Tilings and Related Integer Solid Partitions. Journal of Statistical Physics, 109(5-6):945-965, December 2002.
Emanuel Gull, Andrew J. Millis, Alexander I. Lichtenstein, Alexey N. Rubtsov, Matthias Troyer, and Philipp Werner. Continuous-time Monte Carlo methods for quantum impurity models. Rev. Mod. Phys., 83(2):349-404, May 2011.
Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58(2):86-88, Jan 1987.
Ulli Wolff. Collective Monte Carlo updating for spin systems. Phys. Rev. Lett., 62(4):361-364, Jan 1989.
J. F. Sadoc and R. Mosseri. Geometrical Frustration. Cambridge University Press, Cambridge, 2007.

Acknowledgements

I would first like to thank Pascal Viot, director of the Laboratoire de Physique Théorique de la Matière Condensée, for welcoming me into the laboratory and for his advice. I thank Benjamin Canals and Sylvain Capponi for having accepted to act as referees for this thesis, as well as Leticia F. Cugliandolo, Kirone Mallick and Frédéric Mila for being the examiners on my jury.
I am grateful to Rémy Mosseri, whose office door was always open, for his guidance, his advice (and his patience), and for everything he taught me during these four years of internship and thesis. I would also like to thank Julien Vidal, Thomas Barthel, Grégoire Misguich and Pedro Ribeiro, for all the discussions and constructive criticism, as well as Jean-Noël Fuchs, who was my tutor during this thesis. I also thank:
• the secretaries of the LPTMC, Diane Domant, Liliane Cruzel and Sylvie Dalla Foglia, always smiling, even when one shows up with administrative problems that are sometimes frightening.
• Marc Schulz, Fabien Closa, Frédéric Leonard and Félix Rose, with whom I shared an office during this thesis, as well as Jean-François, Thibaut C., Pierre, Marie, Clément, Pascal, Axel, Lucas, Jules, Tom, Julien-Piera, Axelle, Oscar, Thibaut D., Andreas, Elena, Simon, Charlotte, Nicolas, Boris, Elsa, Chloé, and all the other researchers of the lab with whom I was in contact, for the atmosphere and the lessons on the subtlety of French humor.
• the Brazilian "family" in France: Gabriel, Sofia, Thiago S., Julio, Karina, Julian, Gustavo, Mayra, David, Thiago C., Luciana, Guilherme, Thiago F. (yes, Thiago is a popular name in Brazil).
• the professors of USP, Tito Bonagamba and Djalma Redondo, for what I learned with them and for their advice.
• the CFM Foundation for the funding granted during the last three months of the thesis.
Finally, but not least, I thank my parents, Antonio and Luisa, my aunt, Lucia, my sisters, Marga, Cris and Toninha, and all my family for their support.

Thiago MILANETTO SCHLITTLER
15 June 2015
Subject: Study of quantum dimer and partition models on honeycomb lattices

Abstract: Quantum dimer models (QDMs) display a series of interesting behaviors, such as topological order and spin-liquid phases. In this thesis, we study these models on a honeycomb lattice, as well as their equivalence with the partition problems, a subject from the domain of combinatorics. First, we study the RK model, for which the question of whether one of its phases is gapped or not was still open. We describe a Monte Carlo algorithm that allows us, among other results, to access this gap directly. Second, we propose a generalization of this model. We find a more complex phase diagram, with phase transitions between the different topological sectors, compatible with Cantor deconfinement. Third, we study the application of the RK model to honeycomb lattices associated with the planar partition problems. This imposes new boundary conditions, and we find a new behavior of the model. We also propose a method that uses the properties of the partition problems' configuration space to reduce the complexity of the QDM. Finally, we model the problems of classical crystal corner growth and melting within the formalism of the partition problems, finding a smooth transition between the limit interfaces of "amoeba" type and the arctic circle.

Keywords: Quantum dimer models, partition problems, quantum Monte Carlo, Cantor deconfinement, crystal growth, arctic circle.
188863
https://www.clevelandzoosociety.org/z/2021/03/02/truth-or-tail-giraffe-have-more-neck-bones-than-a-human
Truth or Tail: Giraffe have more neck bones than a human.
Posted on Tuesday, March 2nd, 2021

TALL TALE! Even though the neck of a giraffe can be eight feet long and weigh up to 600 pounds, they only have seven neck vertebrae - the same number of neck bones that humans have! But unlike our vertebrae, each of theirs can be up to 10 inches long. These large vertebrae link together to form those famous long necks we all know and love. A giraffe's long neck allows it to eat leaves high in the trees. This decreases food competition between them and other plant-eating animals. Their long necks also help them spot distant predators. Because of their unique vantage points, other grassland prey species will often look to giraffe for signs of incoming danger. Lastly, male giraffe will use their long necks to compete with other males. They do this by swinging their necks against each other in a behavior called 'necking'. We do NOT recommend trying this at home…
188864
https://www.youtube.com/watch?v=Zw5t6BTQYRU
Logarithms... How? (NancyPi)
NancyPi
Posted: 19 Aug 2018

Description: MIT grad introduces logs and shows how to evaluate them. To skip ahead: 1) For how to understand and evaluate BASIC LOGS, skip to time 0:52. 2) For how to evaluate weirder logs, including the log of 0, 1, a FRACTION, or a NEGATIVE number, skip to time 6:44. 3) For NATURAL LOGS (LN X), skip to time 11:17. 4) For even weirder logs, including SOLVING for X and using the CHANGE-OF-BASE formula, skip to time 14:56. Nancy formerly of MathBFF explains the steps.

1) BASIC LOGS: you can read log notation as "log, base 3, of 9 equals X". The small (subscript) number is called the base. You can always evaluate a log expression by rearranging it into something called exponential form. Every log expression is connected to an exponential expression. In this example, the log is connected to the exponential form "3 to the X power equals 9". This means, "3 raised to what power gives you 9?" Since 3 raised to the power of 2 equals 9, the answer for X is 2. This is also the answer for the value of the log expression. The log is always equal to the power (or exponent) in the exponential version, and in this case it equals 2. If you want, you can find the log value in your head just by asking yourself what power you need in order to turn the base number into the middle number ("argument" number). Note: if there is no base number in the log expression (no little subscript number), then the base is 10, since 10 is the default base.

2) WEIRDER LOGS (log of 0, 1, a negative number, or a fraction): you can use the same steps to rearrange log expressions that have a fraction, negative number, 0, or 1 in them. You can still rearrange them to be in exponential form just like you can with the basic logs from earlier. The log of 1 will always be 0, since 0 is the only power that can turn a base into 1. The log of 0 will always be undefined, since no power can turn a base into 0. The log of a negative number is undefined in the real number system, since no real power can turn a positive base into a negative number.

3) NATURAL LOGS (ln x): the natural log is just a special type of log where the base is e (the special math constant e, which is approximately 2.718 if you plug it into your calculator). You can use the same steps for rearranging the log expression into exponential form. Just remember that ln x means log, base e.

4) EVEN WEIRDER LOGS (solving for X, change-of-base formula): even if there is an X variable in the log part of an equation, you can still rearrange the equation into exponential form. This will let you solve for X. Sometimes you might need to use the change-of-base formula to evaluate a log expression. If there is no whole number power you know that works, it may actually be a decimal power that you can find by using the change-of-base formula. For example, you can re-write log, base 2, of 7 as (log 7)/(log 2) and use your calculator to find the decimal number if you need it.

Transcript: Hi guys, I'm Nancy, and I'm going to show you logarithms. "Logarithms" sounds a little pretentious so I'm just gonna call them "logs" and here are the kinds I'm gonna show you. So if you see something up here that looks like the kind you're trying to figure out you can use the links in the video or the description to skip ahead.
So I'm gonna show you first the basic kind of logs then I'll show you some weirder kinds if you have a log of a negative number, 0, 1, or a fraction. I remember hating the fraction kind. And then I'll show you natural logs which are a special kind of log. And finally, some even weirder logs and by that I mean if you have an x in the log part of your expression or if you have to use the change-of-base formula. So let me show you the basic ones. OK, what if you have a basic log expression like this? What is that? How do you even read that expression? The way you read it is "log, base 3, of 9". But how do you evaluate it? Or what if you have to simplify that? How do you do it? There are a few ways. One way, if you're the kind of person who loves to do things in your head you can just think to yourself "3, raised to what power, gives me 9?" And if you can do it that way, more power to you, you are cooler than the rest of us. But for all the rest of us usually the easiest way the fastest way is to re-write this into exponential form. And I will show you how to do that. But the first thing you should do is if this doesn't already equal something, make it equal to x. And there's a reason for that. Because you're going to rearrange this and connect it to an exponential version. How do you do that? There's a pattern you can use. And here's the pattern. To re-write this into exponential form so that you can find the answer you start with this little base that I circled. This is the base of your log expression. Start there, and move in this direction of the arrow... and raise that base to the power of what's on the right side. So the base 3... raised to the power of the right side, which is x... equals...set it equals... 9, the middle number, or the other number that's left that you haven't used. Alright, so I've just rearranged into exponential form with that pattern. Some people think of it as "little, to the right, equals middle" if that helps you but that's the general way that you write this. That's the order And you end up with something like this that's exponential. 3 to the x power equals 9 which is great, 'cause all you have to do is figure out what x makes that true and since... since 3 times 3 equals 9, or 3 squared equals 9... then you can tell that x must be 2. And since x equals 2, that's your answer for the log. That log is equal to 2. The log is always equal to a power, from the exponential version. Alright, so your answer's 2. If it helps, you can think of this order as a snail. Sounds dumb, but for some people it makes this stick, the order. The idea is that the circle part, the head of the snail, is your base and then you move in a spiraling direction kinda the way a snail shell spirals so counter-clockwise spiraling and if you do that, you'll get the right order of... base, to the right side, equals the middle number. Doesn't really look like a snail, maybe if I added... more spirals maybe, pretty rough. So if that helps you, great but let me show you another log expression. Alright, what if you have a log expression like this log of 10,000 and there's no little base written here? I'm showing you this one, because if you don't have a little base given to you... it's going to be 10. The default is 10. It's implied, and I think it's good to go ahead and write it. So this is actually log, base 10, of 10,000. Say that you need to simplify this or evaluate this... I think the best way is still to go ahead and write "= x"... and to connect it to the exponential version of it. 
And that will let you solve for what the log equals. So remember, you start with the base number, the small base number here... you raise it to the power of what's on the right side, which is x... and then you set it equal to the other number the number that's left, the one in the middle, 10,000. OK, so you have an exponential version of it now. All you need to do is figure out what x makes this true... what x makes 10, raised to that power, equal to 10,000... and since 10 times 10 times 10 times 10 equals 10,000... or 10 to the 4th power equals 10,000... you can tell that that little power x has to be 4 because 10^4 equals 10,000, so x equals 4. And now because x equals 4 your whole log expression is just equal to 4, and that is the answer. Alright, here are some weirder kinds of log I want to show you. Log of a fraction, log of 1, log of 0, log of a negative number... You could see some like this. What if you have a log of a fraction? This is log, base 2, of one-eighth. How do you find that? For these, still do the same steps as before. Set it equal to x, rewrite it into exponential form... so when you re-write this, you get 2^x = 1/8 over here. All you need to do is find what x makes this true but it's not obvious when you have a fraction like 1/8 on the right side. You don't want that. You want to somehow make this into 2 to a power. There are tricks for this. First, check to see if anything in this fraction is a power of 2... 8 is 2 to the 3rd power so one trick is to re-write this as 1 over 2^3 instead of 1 over 8. And then the other thing you'll need is to re-write this. 1 over 2^3 is the same as 2^(-3). As you probably know... 2 to the negative 3 power the negative power just means 1 over 2 to the positive version of the power. So when you do that... all you have to do is compare x and this power, the negative 3 to see that x is negative 3... and since x is -3, this whole log is equal to -3... so the log, base 2, of a fraction is probably a negative number, a negative power. What about log of 1? This looks so simple, and yet it can be very confusing... confusingly simple. First of all, if there's no little power there, remember that it's 10. It's a hidden 10, and it's probably better to write it. Set it equal to x, because you don't know what it is. Just set it equal to a variable, re-write it... in exponential form, this is 10 to the x power equals 1. So what power, when you raise 10 to that power, equals 1? Well, if you didn't know... any number raised to the 0 power equals 1. So 10 to the 0 power equals 1. That's the only way that will happen. So that whole log is just equal to 0. What about log of 0? Same idea. For all of these, just try to re-write them and see what happens. And when we re-write it, we get 10^x = 0. If you think about this... there's never any number you could put in for the power of 10... that would ever give you back 0. Every power you put there... will give you a positive number greater than 0. 10 to the 0 power is 1, not 0. 10 to a negative power is not a negative number. It's some fraction like this. So there's no x that will ever make this work. So x is undefined. This log is undefined. And finally, log of a negative number... same steps, try them... and when you do try that, you have 10^x = -1 Just like in this one, you will never get negative 1 back. It's impossible. 10^0 is 1. 10 to the negative number is some fraction that's positive. This is impossible so x is undefined. There's nothing that works. The log is undefined. 
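(All the examples worked so far are quick to confirm in code. A minimal check in Python — the base-10 log of 10,000, the fraction case, log of 1, and the undefined cases:)

import math

print(math.log10(10_000))     # 4.0   (10^4 = 10,000)
print(math.log2(1 / 8))       # -3.0  (2^-3 = 1/8)
print(math.log(1))            # 0.0   (any base: log of 1 is 0)
try:
    math.log(0)
except ValueError:
    print("log of 0 is undefined")
try:
    math.log(-1)
except ValueError:
    print("log of a negative number is undefined (in the reals)")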
So these were my weird examples of logs. Fractions, negative numbers, etc... Now I wanna show you something called the natural log. Alright, now let me show you natural logs. That's what this ln means. It stands for natural log, so natural log of 1. Or ln of 1. What is a natural log? It really is not as hard as it sounds. It's just a special kind of log where the base is e. So whenever you see ln... you can re-write it as log, base e. I think that's the easiest way to evaluate it. So ln 1, natural log of 1, is really just log, base e, of 1. So if you want, you can re-write it and then it'll be easier to solve for what this equals. What is e, by the way? It's just a special number in math. It's a constant, and if you put it in your calculator it'll be some decimal like 2.718, something, something... and so on, forever... but it really is just defined to be the base of the natural log. So you don't really need to understand that. It's just that, when you see ln... just know that it means log, base e. After doing that all the other steps are the same as what you've been doing before, so... set this equal to x. Rearrange it into exponential form. So if you rearrange this into exponential form you start with the base, which happens to be this e symbol... you raise it to the x power, and set it equal to 1. So e to the x power equals 1. And after that, just ask yourself "What power would ever make this equal to 1?" And the only one that works is 0. Remember, a number raised to the 0 power is 1, always so x has to be 0... and since x is 0, it also means that ln of 1 is also 0. Here's one more natural log problem. ln(e^3), so natural log of e to the third power or natural log of e^3. This looks complicated. I promise it's going to be very easy for you. I still think the best way to do this is to re-write the ln as log, base e, so let's do that. Looks a little strange, but that's what that stands for. From here, just use the same steps. So how did I get this? Same steps as before. Take the base e, raise it to the right side power, x... and set it equal to whatever was in the middle of the log in the argument of the log, which happened to be e^3. So whatever that was, that's what you set it equal to. And after you do that all you need to do is match these, compare them and see that x has to be 3. And since x has to be 3, this whole log is just equal to 3. So that whole expression was just equal to 3. Now there's one more kind of log I want to show you and that is when you have an x somewhere in the log expression in the base or in the argument. So it's a little weirder, but it's the last type, I promise. So let's look at those.
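(In Python, math.log with a single argument is exactly this natural log, base e, so both examples above are one-liners — a quick check:)

import math

print(math.log(1))            # 0.0: ln 1 = 0, since e^0 = 1
print(math.log(math.e ** 3))  # 3.0 (up to float rounding): ln(e^3) = 3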
For both of these... if you see something like this, even though there's a variable inside the log, you really should try the same steps, because you'll probably get something that's a lot easier to figure out. So let's try it for the first one. So when you rearrange it, you take the base, x... raise it to the 5th power, on the right... and set it equal to 32. x^5 = 32. And then looking at that, you can tell that x has to be 2, because 2 to the 5th power equals 32. And that really is the answer. x equals 2. So just try the same steps as before, and you may be able to figure out what the solution is. Same thing for this one. Rearrange it... so you get base 5, raised to the power of 3, equals x, because x was the middle number. And this is even faster. You just need to figure out what 5^3 is. 5 times 5 times 5 is 125... so the answer is x equals 125. So if you see one like that, just try the same steps, that's... that's the bottom line. And then this last type... calculate log, base 2, of 7... log, base 2, of 7, I mean, that looks a lot like what we were doing before, and you should try to rearrange it the same way. So if you do... if you do, you get 2^x = 7, and I can't think of a number that makes that true. I don't know what that number is. If this were 8, it would be 2 to the 3rd power... but there's no nice, neat integer number that works. It's going to be a decimal that's hard to find by hand. So what you have to do, if you run into that, is use the change-of-base formula... and whatever log you had, log, base 2, of 7... the change-of-base formula lets you re-write it. So you can re-write it, and it becomes log of 7, the larger number, over log of 2, the base number. Log of 7 over log of 2, and you can use whatever base you want. This is just the default base 10, but you plug this into a calculator, and you do that division, and whatever decimal you get is the answer for the log. I'm showing you that in case you get one like that, where there aren't these nice, neat numbers that you can do by hand, so that's important, in case that comes up. So I hope that helped you understand logs. I know logs are everyone's favorite. It's OK, you don't have to like math, but you can like my video, so if you did, please click 'Like' or subscribe.
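The video only describes the change-of-base formula verbally; as a supplement (my addition, not part of the transcript), here is a quick Python check that log base 2 of 7 really is log(7) divided by log(2), in any common base:

```python
import math

# Change of base: log_2(7) = log(7) / log(2), using any base for the right-hand side.
print(math.log(7) / math.log(2))      # natural-log version: about 2.807354922057604
print(math.log10(7) / math.log10(2))  # base-10 version gives the same value
print(math.log2(7))                   # built-in base-2 log agrees
print(2 ** 2.807354922057604)         # raising 2 to that power recovers ~7
```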
188865
https://math.stackexchange.com/questions/4289475/betting-on-the-appearance-of-hht-or-hth-in-a-series-of-coin-flips
probability - Betting on the appearance of HHT or HTH in a series of coin flips - Mathematics Stack Exchange

Betting on the appearance of HHT or HTH in a series of coin flips

Asked Oct 27, 2021 · Viewed 10k times · Score 10

Suppose that we bet on the outcomes of coin flips in the following way. Each of us chooses a different series of three flips, and we flip a coin until three consecutive flips, in order, match your choice or mine. For example, you might pick HHT while I pick HTH, and we flip until either HHT or HTH. Is one of us more likely to win?

My friend Shimon argues as follows. If you pick HHT and I pick HTH, a long stream of flips will sooner or later include HH, the first two flips in your choice; and at that moment you have won, because there's no way that later flips can produce my HTH streak before it produces your HHT streak. But no corresponding situation exists for the other side; should HT, the first two flips in my choice, arise, it's still possible for you to win, because the next flips could be THHT, so that the six flips HTTHHT would be another win for you.

I consider this total nonsense. It's obvious that the next three flips are as likely to be HHT, or HTT, or any of the six other possible sets of three flips. His approach of asking whether wins are possible given certain strings of past flips is mind-boggling, working without any clear sample-space and defying mathematics. I suggested a simulation.
The simulation I produced is an Excel spreadsheet that flips sets of 65 flips. In each set, it finds the first flip that begins your choice or mine, and assigns a winner. The following screen shot shows that simulation. Blue is associated with HHT, and pink is associated with HTH. In the first set, HHT began in flips 2, 10, 16, 20, 29, 36 and 50. HTH began in flips 3, 5, 11, 13, 21 and 51. Because HHT arose first, there's a 1 in the blue vertical bar beside the first set. As you may be able to see, HHT won 8 of the pictured 10 sets. In my little experiment, HHT won far more often, and often by dramatic margins.

I've looked around this stack exchange, and found a few questions almost exactly like mine (like this one and this one) and several questions at least very similar to mine (like this one and this one). They all seem to believe that Shimon is right, and some sequences are more likely than others. But despite that seeming (if not very clearly stated) unanimity, and despite my own simulation, I just cannot believe it. Obviously every possible set of three flips is equally likely. So I guess that my question is, can someone explain this at a more intuitive level, reaching the mistake that's misleading me or (I still think) Shimon?

Tags: probability, intuition

asked Oct 27, 2021 by Chaim

Comments:

- This is called Penney's Game and there's a lot of information regarding it online. – lulu, Oct 27, 2021
- As an obvious case that might help intuition, suppose A chooses TTT and B chooses HTT. Then convince yourself that the only way A can possibly win is if the coin comes up TTT initially, so A's chance of winning is only 1/8. – lulu, Oct 27, 2021
- Start with HH: a next flip of H is bad, but following it with a T gives HHT a win. However, start with HT followed by a T (bad), and neither can win on the next flip. – herb steinberg, Oct 27, 2021
- Undoubtedly this has been asked and answered on this site before, probably several times. E.g., math.stackexchange.com/questions/3853515/… and math.stackexchange.com/questions/954395/… and math.stackexchange.com/questions/3856649/… and math.stackexchange.com/questions/3390215/… – Gerry Myerson, Oct 27, 2021
- That's a different question. Each 3-character string is equally likely to come up, so that game is fair (but not terribly interesting). Study the example I gave you, or look up any of the online references. – lulu, Oct 28, 2021

4 Answers

Answer (score 11):

Let's say I'm betting on HHT and you're betting on HTH. I claim the odds are 2 to 1 in my favor. Look at it this way. Sooner or later the coin is going to come up H for the first time. Let's consider what happens on the next two flips after that first head. There are four cases.

- HHT: I win.
- HHH: I win; sooner or later the coin will come up T.
- HTH: You win.
- HTT: We start over.

The following parable may or may not illuminate the error in your thinking. An underground train follows a circular route, stopping in turn at stations A, B, C, . . .
X, Y, Z, A, B, C, . . . You and your friend stagger on board and fall asleep. Some time later you both wake up, between stations, having no idea where you got on or where you are now.

You: I wonder where we are. I get off at Q.
Your friend: I bet you ten bucks we hit my stop before yours.
You: Hmm. One station is just as likely as another. That's a fair bet. You're on. Which is your stop?
Your friend: P.

– answered Oct 28, 2021 by bof

Comment: A nice example indeed. It's similar to this question. – justhalf, Oct 28, 2021

Answer (score 9):

I feel your pain: you are getting many arguments that explain why your friend is correct, and links to previous stack exchange posts that also side with your friend's conclusion. Even your own simulations confirm your friend is correct! However, what (if anything!) is wrong with your reasoning to the contrary?!? The provided arguments and findings may show that your reasoning is incorrect, but none of them explain why or where your reasoning is incorrect. So, instead of providing yet another argument why your friend's reasoning is correct, let me see if I can get to the heart of why your reasoning is incorrect.

I think that what lies at the core of your reasoning is the idea that "at any point in a long sequence of heads and tails, any specific sequence of three heads and tails is just as possible as any other". And yes, that is true if you start at that point X in the sequence and only look at the next three entries of that sequence. Indeed, this is just like when you say:

"It's obvious that the next three flips are as likely to be HHT, or HTT, or any of the six other possible sets of three flips."

However, whether some specific sequence of three will appear before some other sequence of three also depends on whatever entries you have before that point X. Thus we get your friend's reasoning: if the last two outcomes that you obtained before X were both heads, then the very next entry could end the game right then and there.

So, your mistake could be seen as the result of an ambiguity in the claim of "getting a specific sequence from some point on". Again, if you mean this to be the event of getting a specific sequence starting from a point in the sequence, then you are right: starting at that point in the sequence, any sequence is just as likely to appear as any other. But what we are after is the event of getting a specific sequence from that point in the game. And, at any point in the game, the outcomes that happened before the point of the sequence you are currently at make a difference. I believe the conflation of these two different senses of "from that point on" lies at the heart of the mistake in your reasoning.

– answered Oct 28, 2021 by Bram28

Answer (score 2):

Your intuition is correct in that as many HTH as HHT will appear in any long enough sequence.
If you count those occurrences in your simulations, you may confirm that result, and if you play with your friend a (boring) game of "we flip a coin 1000 times, we count how many HHT and how many HTH there are, and whoever has the most wins", this is a fair game.

However, what you are playing is a different game: you are playing "which sequence comes first", and this game is not fair. You don't win every time HTH appears:

- You win when HTH appears and HHT has not appeared yet.
- Your opponent wins when HHT appears and HTH has not appeared yet.

The key here is to see that the events (the winning cases) are not independent. HHT and HTH appear just as often, but half of the time HTH appears just one flip after HHT.

As a suggestion to help your intuition, let's suppose no one wins in the first three coin flips (this removes 1/4 of the tries, half of which are wins and the other half losses). Now consider the first HTH of the sequence and ask yourself what the previous flip was:

- Half of the time, it will be an H, making the pattern ...HHTH. You lose because HHT appeared just one position before HTH!
- Half of the time, it will be a T, making the pattern ...THTH. It all depends whether HHT is to be found beforehand or not. You may win, or not.

Symmetry is broken because the two patterns HHT and HTH are not independent. The calculations in the other answers are correct: the odds are against you 2:1. And if you want to trick your opponent, offer a game of "which sequence comes second"...

– answered Oct 28, 2021 by Evargalo

Answer (score 1):

I already posted one possible explanation as to where and how your intuition is taking you astray, but maybe what you are doing is this:

Again, you correctly point out that at any specific location of any long sequence, any specific sequence is just as likely to occur as any other. This also implies that the number of those sub-sequences you get inside any longer sequence is exactly the same for any sub-sequence. [This is something you can confirm with your simulations: you should get (ignoring small random variations) just as many HHT's and HTH's in all those sequences you looked at.]

As such, it certainly makes intuitive sense that the average first location of any subsequence inside a larger sequence would not differ between the different subsequences: there are just as many HTH's as HHT's, so if we just randomly sprinkle those into the larger sequence, then why would one, on average, appear earlier than any other? This seems to be along the lines that you reason, but unfortunately it does not work.

As the answer by @Evargalo shows: the events of HHT and HTH occurring in a long sequence are not independent. But it is also true that the event of one HTH occurring in a long sequence is not independent from the occurrence of a second HTH either, because once you have an HTH occurrence, you already have the first H of a second HTH event. Indeed, what happens as a result is that all the HTH occurrences in a sequence will tend to be more clustered/clumped together in comparison to the occurrences of HHT events [again, this is something you could try and quantify in your simulations as well, but just eye-balling your graph, I think you can see that the HTH events are more clustered than the HHT events ...
but also have longer stretches of having no such occurrences in between ... so you could look at the standard deviation of the distances between their occurrences: you should find that for HTH it's higher than for HHT.]

On the other hand, because there is no overlap possible between any two HHT sequences, their occurrences will "push" each other away, and will be more evenly distributed. And, as a result of that, it stands to reason that the occurrence of the first HTH event would indeed be further into the sequence than the first HHT event.

– answered Oct 28, 2021 by Bram28
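Since the question turns on a simulation, here is a minimal Python Monte Carlo sketch of the "which pattern comes first" game (my addition, not part of the thread). It should report HHT winning about 2/3 of the time, matching the 2:1 odds computed in the answers:

```python
import random

def winner(a="HHT", b="HTH"):
    """Flip a fair coin until pattern a or b appears; return the winning pattern."""
    seq = ""
    while True:
        seq += random.choice("HT")
        if seq.endswith(a):
            return a
        if seq.endswith(b):
            return b

trials = 100_000
hht_wins = sum(winner() == "HHT" for _ in range(trials))
print(f"HHT won {hht_wins / trials:.3f} of {trials} games")  # typically ~0.667
```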
188866
https://en.wikipedia.org/wiki/Landau_theory
Landau theory - Wikipedia

From Wikipedia, the free encyclopedia

Theory of continuous phase transitions. Not to be confused with Ginzburg–Landau theory.

Landau theory (also known as Ginzburg–Landau theory, despite the confusing name) in physics is a theory that Lev Landau introduced in an attempt to formulate a general theory of continuous (i.e., second-order) phase transitions. It can also be adapted to systems under externally-applied fields, and used as a quantitative model for discontinuous (i.e., first-order) transitions. Although the theory has now been superseded by the renormalization group and scaling theory formulations, it remains an exceptionally broad and powerful framework for phase transitions, and the associated concept of the order parameter as a descriptor of the essential character of the transition has proven transformative.

Mean-field formulation (no long-range correlation)

Main article: Mean field theory

Landau was motivated to suggest that the free energy of any system should obey two conditions:

- Be analytic in the order parameter and its gradients.
- Obey the symmetry of the Hamiltonian.

Given these two conditions, one can write down (in the vicinity of the critical temperature, $T_c$) a phenomenological expression for the free energy as a Taylor expansion in the order parameter.

Second-order transitions

[Figure: sketch of the free energy as a function of the order parameter $\eta$.]

Consider a system that breaks some symmetry below a phase transition, which is characterized by an order parameter $\eta$. This order parameter is a measure of the order before and after a phase transition; the order parameter is often zero above some critical temperature and non-zero below the critical temperature. In a simple ferromagnetic system like the Ising model, the order parameter is characterized by the net magnetization $m$, which becomes spontaneously non-zero below a critical temperature $T_c$.
In Landau theory, one considers a free energy functional that is an analytic function of the order parameter. In many systems with certain symmetries, the free energy will only be a function of even powers of the order parameter, for which it can be expressed as the series expansion

$$F(T,\eta) - F_0 = a(T)\,\eta^2 + \frac{b(T)}{2}\,\eta^4 + \cdots$$

In general, there are higher order terms present in the free energy, but it is a reasonable approximation to consider the series to fourth order in the order parameter, as long as the order parameter is small. For the system to be thermodynamically stable (that is, the system does not seek an infinite order parameter to minimize the energy), the coefficient of the highest even power of the order parameter must be positive, so $b(T) > 0$. For simplicity, one can assume that $b(T) = b_0$, a constant, near the critical temperature. Furthermore, since $a(T)$ changes sign above and below the critical temperature, one can likewise expand $a(T) \approx a_0 (T - T_c)$, where it is assumed that $a > 0$ for the high-temperature phase while $a < 0$ for the low-temperature phase, for a transition to occur. With these assumptions, minimizing the free energy with respect to the order parameter requires

$$\frac{\partial F}{\partial \eta} = 2a(T)\,\eta + 2b(T)\,\eta^3 = 0$$

The solution to the order parameter that satisfies this condition is either $\eta = 0$, or

$$\eta_0^2 = -\frac{a}{b} = -\frac{a_0}{b_0}(T - T_c)$$

[Figure: order parameter and specific heat as a function of temperature.]

It is clear that this solution only exists for $T < T_c$; otherwise $\eta = 0$ is the only solution. Indeed, $\eta = 0$ is the minimum solution for $T > T_c$, but the solution $\eta_0$ minimizes the free energy for $T < T_c$, and thus is a stable phase. Furthermore, the order parameter follows the relation

$$\eta(T) \propto \left| T - T_c \right|^{1/2}$$

below the critical temperature, indicating a critical exponent $\beta = 1/2$ for this mean-field Landau model. The free energy will vary as a function of temperature, given by

$$F - F_0 = \begin{cases} -\dfrac{a_0^2}{2b_0}(T - T_c)^2, & T < T_c \\ 0, & T > T_c \end{cases}$$

From the free energy, one can compute the specific heat,

$$c_p = -T\,\frac{\partial^2 F}{\partial T^2} = \begin{cases} \dfrac{a_0^2}{b_0}\,T, & T < T_c \\ 0, & T > T_c \end{cases}$$

which has a finite jump at the critical temperature of size $\Delta c = a_0^2 T_c / b_0$. This finite jump is therefore not associated with a discontinuity that would occur if the system absorbed latent heat, since $T_c \Delta S = 0$. It is also noteworthy that the discontinuity in the specific heat is related to the discontinuity in the second derivative of the free energy, which is characteristic of a second-order phase transition. Furthermore, the fact that the specific heat has no divergence or cusp at the critical point indicates its critical exponent for $c \sim |T - T_c|^{-\alpha}$ is $\alpha = 0$.
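As a quick numerical illustration of these results (my addition, not part of the article; the values of $a_0$, $b_0$, and $T_c$ are arbitrary), one can minimize the quartic free energy on a grid of $\eta$ and compare the minimizer with the Landau prediction $\eta_0 = \sqrt{a_0 (T_c - T)/b_0}$:

```python
import numpy as np

# Sketch: minimize F(eta) = a0*(T - Tc)*eta^2 + (b0/2)*eta^4 on a grid of eta,
# and compare with the Landau result eta_0^2 = (a0/b0)*(Tc - T) below Tc.
a0, b0, Tc = 1.0, 1.0, 1.0          # arbitrary illustrative values
eta = np.linspace(-2.0, 2.0, 4001)  # grid of candidate order parameters

for T in (0.5, 0.75, 0.9, 1.1):
    F = a0 * (T - Tc) * eta**2 + 0.5 * b0 * eta**4
    eta_min = abs(eta[np.argmin(F)])
    eta_landau = np.sqrt(max(a0 * (Tc - T) / b0, 0.0))
    print(f"T={T:4.2f}  grid minimum |eta|={eta_min:.3f}  Landau eta_0={eta_landau:.3f}")
```

Below $T_c$ the two columns agree and scale as $\sqrt{T_c - T}$; above $T_c$ both vanish.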
Irreducible representations

Landau expanded his theory to consider the constraints it imposes on the symmetries before and after a second-order transition. They need to comply with a number of requirements:

- The distorted (or ordered) symmetry needs to be a subgroup of the higher one.
- The order parameter that embodies the distortion needs to transform as a single irreducible representation (irrep) of the parent symmetry.
- The irrep should not contain a third-order invariant.
- If the irrep allows for more than one fourth-order invariant, the resulting symmetry minimizes a linear combination of these invariants.

In the latter case, more than one daughter structure should be reachable through a continuous transition. A good example of this is the pair formed by the structure of MnP (space group Cmca) and the low-temperature structure of NbS (space group P6_3mc). They are both daughters of the NiAs structure, and their distortions transform according to the same irrep of that space group.

Applied fields

In many systems, one can consider a perturbing field $h$ that couples linearly to the order parameter. For example, in the case of a classical dipole moment $\mu$, the energy of the dipole-field system is $-\mu B$. In the general case, one can assume an energy shift of $-\eta h$ due to the coupling of the order parameter to the applied field $h$, and the Landau free energy will change as a result:

$$F(T,\eta) - F_0 = a_0 (T - T_c)\,\eta^2 + \frac{b_0}{2}\,\eta^4 - \eta h$$

In this case, the minimization condition is

$$\frac{\partial F}{\partial \eta} = 2a(T)\,\eta + 2b_0\,\eta^3 - h = 0$$

One immediate consequence of this equation and its solution is that, if the applied field is non-zero, then the magnetization is non-zero at any temperature. This implies there is no longer a spontaneous symmetry breaking that occurs at any temperature. Furthermore, some interesting thermodynamic and universal quantities can be obtained from this condition. For example, at the critical temperature, where $a(T_c) = 0$, one can find the dependence of the order parameter on the external field:

$$\eta(T_c) = \left( \frac{h}{2b_0} \right)^{1/3} \propto h^{1/\delta}$$

indicating a critical exponent $\delta = 3$.
[Figure: zero-field susceptibility as a function of temperature near the critical temperature.]

Furthermore, from the above condition it is possible to find the zero-field susceptibility $\chi \equiv \partial\eta/\partial h|_{h=0}$, which must satisfy

$$0 = 2a\,\frac{\partial \eta}{\partial h} + 6b\,\eta^2\,\frac{\partial \eta}{\partial h} - 1$$

$$\left[ 2a + 6b\,\eta^2 \right] \frac{\partial \eta}{\partial h} = 1$$

In this case, recalling from the zero-field case that $\eta^2 = -a/b$ at low temperatures, while $\eta^2 = 0$ for temperatures above the critical temperature, the zero-field susceptibility therefore has the following temperature dependence:

$$\chi(T, h \to 0) = \begin{cases} \dfrac{1}{2a_0 (T - T_c)}, & T > T_c \\ \dfrac{1}{-4a_0 (T - T_c)}, & T < T_c \end{cases} \;\propto\; |T - T_c|^{-\gamma}$$

which is reminiscent of the Curie–Weiss law for the temperature dependence of magnetic susceptibility in magnetic materials, and yields the mean-field critical exponent $\gamma = 1$.

It is noteworthy that although the critical exponents so obtained are incorrect for many models and systems, they correctly satisfy various exponent equalities such as the Rushbrooke equality: $\alpha + 2\beta + \gamma = 2$.

First-order transitions

Landau theory can also be used to study first-order transitions. There are two different formulations, depending on whether or not the system is symmetric under a change in sign of the order parameter.

I. Symmetric Case

Here we consider the case where the system has a symmetry and the energy is invariant when the order parameter changes sign. A first-order transition will arise if the quartic term in $F$ is negative. To ensure that the free energy remains positive at large $\eta$, one must carry the free-energy expansion to sixth order,

$$F(T,\eta) = A(T)\,\eta^2 - B_0\,\eta^4 + C_0\,\eta^6,$$

where $A(T) = A_0 (T - T_0)$, and $T_0$ is some temperature at which $A(T)$ changes sign. We denote this temperature by $T_0$ and not $T_c$, since it will emerge below that it is not the temperature of the first-order transition, and since there is no critical point, the notion of a "critical temperature" is misleading to begin with. $A_0$, $B_0$, and $C_0$ are positive coefficients.

We analyze this free energy functional as follows: (i) For $T > T_0$, the $\eta^2$ and $\eta^6$ terms are concave upward for all $\eta$, while the $\eta^4$ term is concave downward. Thus for sufficiently high temperatures $F$ is concave upward for all $\eta$, and the equilibrium solution is $\eta = 0$. (ii) For $T < T_0$, both the $\eta^2$ and $\eta^4$ terms are negative, so $\eta = 0$ is a local maximum, and the minimum of $F$ is at some non-zero value $\pm\eta_0(T)$, with $F(T_0, \eta_0(T_0)) < 0$.
(iii) For $T$ just above $T_0$, $\eta = 0$ turns into a local minimum, but the minimum at $\eta_0(T)$ continues to be the global minimum since it has a lower free energy. It follows that as the temperature is raised above $T_0$, the global minimum cannot continuously evolve from $\eta_0(T)$ to 0. Rather, at some intermediate temperature $T^*$, the minima at $\eta_0(T^*)$ and $\eta = 0$ must become degenerate. For $T > T^*$, the global minimum will jump discontinuously from $\eta_0(T^*)$ to 0.

To find $T^*$, we demand that the free energy be zero at $\eta = \eta_0(T^*)$ (just like the $\eta = 0$ solution), and furthermore that this point should be a local minimum. These two conditions yield two equations,

$$0 = A(T)\,\eta^2 - B_0\,\eta^4 + C_0\,\eta^6,$$
$$0 = 2A(T)\,\eta - 4B_0\,\eta^3 + 6C_0\,\eta^5,$$

[Figure: first-order phase transition demonstrated by the discontinuity of the order parameter as a function of temperature.]

which are satisfied when $\eta^2(T^*) = B_0 / 2C_0$. The same equations also imply that $A(T^*) = A_0 (T^* - T_0) = B_0^2 / 4C_0$. That is,

$$T^* = T_0 + \frac{B_0^2}{4 A_0 C_0}.$$

From this analysis both points made above can be seen explicitly. First, the order parameter suffers a discontinuous jump from $(B_0 / 2C_0)^{1/2}$ to 0. Second, the transition temperature $T^*$ is not the same as the temperature $T_0$ where $A(T)$ vanishes. At temperatures below the transition temperature, $T < T^*$, the order parameter is given by

$$\eta_0^2 = \frac{B_0}{3C_0}\left[ 1 + \sqrt{1 - \frac{3 A(T)\, C_0}{B_0^2}} \right]$$

which is plotted to the right. This shows the clear discontinuity associated with the order parameter as a function of the temperature. To further demonstrate that the transition is first order, one can show that the free energy for this order parameter is continuous at the transition temperature $T^*$, but its first derivative (the entropy) suffers a discontinuity, reflecting the existence of a non-zero latent heat.

II. Nonsymmetric Case

Next we consider the case where the system does not have a symmetry. In this case there is no reason to keep only even powers of $\eta$ in the expansion of $F$, and a cubic term must be allowed. (The linear term can always be eliminated by a shift $\eta \to \eta + \text{constant}$.) We thus consider a free energy functional

$$F(T,\eta) = A(T)\,\eta^2 - C_0\,\eta^3 + B_0\,\eta^4 + \cdots.$$

Once again $A(T) = A_0 (T - T_0)$, and $A_0$, $B_0$, $C_0$ are all positive. The sign of the cubic term can always be chosen to be negative, as we have done, by reversing the sign of $\eta$ if necessary.
We analyze this free energy functional as follows: (i) For $T < T_0$, we have a local maximum at $\eta = 0$, and since the free energy is bounded below, there must be two local minima at nonzero values $\eta_-(T) < 0$ and $\eta_+(T) > 0$. The cubic term ensures that $\eta_+$ is the global minimum since it is deeper. (ii) For $T$ just above $T_0$, the minimum at $\eta_-$ disappears, the maximum at $\eta = 0$ turns into a local minimum, but the minimum at $\eta_+$ persists and continues to be the global minimum. As the temperature is further raised, $F(T, \eta_+(T))$ rises until it equals zero at some temperature $T^*$. At $T^*$ we get a discontinuous jump in the global minimum from $\eta_+(T^*)$ to 0. (The minima cannot coalesce, for that would require the first three derivatives of $F$ to vanish at $\eta = 0$.)

To find $T^*$, we demand that the free energy be zero at $\eta = \eta_+(T^*)$ (just like the $\eta = 0$ solution), and furthermore that this point should be a local minimum. These two conditions yield two equations,

$$0 = A(T)\,\eta^2 - C_0\,\eta^3 + B_0\,\eta^4,$$
$$0 = 2A(T)\,\eta - 3C_0\,\eta^2 + 4B_0\,\eta^3,$$

which are satisfied when $\eta(T^*) = C_0 / 2B_0$. The same equations also imply that $A(T^*) = A_0 (T^* - T_0) = C_0^2 / 4B_0$. That is,

$$T^* = T_0 + \frac{C_0^2}{4 A_0 B_0}.$$

As in the symmetric case, the order parameter suffers a discontinuous jump from $C_0 / 2B_0$ to 0. Second, the transition temperature $T^*$ is not the same as the temperature $T_0$ where $A(T)$ vanishes.

Applications

It was known experimentally that the liquid–gas coexistence curve and the ferromagnet magnetization curve both exhibited a scaling relation of the form $|T - T_c|^{\beta}$, where $\beta$ was mysteriously the same for both systems. This is the phenomenon of universality. It was also known that simple liquid–gas models are exactly mappable to simple magnetic models, which implied that the two systems possess the same symmetries. It then followed from Landau theory why these two apparently disparate systems should have the same critical exponents, despite having different microscopic parameters. It is now known that the phenomenon of universality arises for other reasons (see Renormalization group). In fact, Landau theory predicts the incorrect critical exponents for the Ising and liquid–gas systems.

The great virtue of Landau theory is that it makes specific predictions for what kind of non-analytic behavior one should see when the underlying free energy is analytic. Then, all the non-analyticity at the critical point, the critical exponents, are because the equilibrium value of the order parameter changes non-analytically, as a square root, whenever the free energy loses its unique minimum.
The extension of Landau theory to include fluctuations in the order parameter shows that Landau theory is only strictly valid near the critical points of ordinary systems with spatial dimensions higher than 4. This is the upper critical dimension, and it can be much higher than four in more finely tuned phase transitions. In Mukamel's analysis of the isotropic Lifshitz point, the critical dimension is 8. This is because Landau theory is a mean field theory, and does not include long-range correlations. This theory does not explain non-analyticity at the critical point, but when applied to the superfluid and superconductor phase transitions, Landau's theory provided inspiration for another theory, the Ginzburg–Landau theory of superconductivity.

Including long-range correlations

Consider the Ising model free energy above. Assume that the order parameter $\psi$ and the external magnetic field $h$ may have spatial variations. Now, the free energy of the system can be assumed to take the following modified form:

$$F := \int d^D x\,\left( a(T) + r(T)\,\psi^2(x) + s(T)\,\psi^4(x) + f(T)\,(\nabla \psi(x))^2 + h(x)\,\psi(x) + \mathcal{O}\!\left(\psi^6; (\nabla\psi)^4\right) \right)$$

where $D$ is the total spatial dimensionality. So,

$$\langle \psi(x) \rangle := \frac{\mathrm{Tr}\,\psi(x)\, e^{-\beta H}}{Z}$$

Assume that, for a localized external magnetic perturbation $h(x) \to 0 + h_0\,\delta(x)$, the order parameter takes the form $\psi(x) \to \psi_0 + \phi(x)$. Then,

$$\frac{\delta \langle \psi(x) \rangle}{\delta h(0)} = \frac{\phi(x)}{h_0} = \beta \left( \langle \psi(x)\psi(0) \rangle - \langle \psi(x) \rangle \langle \psi(0) \rangle \right)$$

That is, the fluctuation $\phi(x)$ in the order parameter corresponds to the order-order correlation. Hence, neglecting this fluctuation (as in the earlier mean-field approach) corresponds to neglecting the order-order correlation, which diverges near the critical point.

One can also solve for $\phi(x)$, from which the scaling exponent $\nu$ for the correlation length $\xi \sim (T - T_c)^{-\nu}$ can be deduced. From these, the Ginzburg criterion for the upper critical dimension for the validity of the Ising mean-field Landau theory (the one without long-range correlation) can be calculated as:

$$D \geq 2 + 2\frac{\beta}{\nu}$$

In our current Ising model, mean-field Landau theory gives $\beta = 1/2 = \nu$, and so it (the Ising mean-field Landau theory) is valid only for spatial dimensionality greater than or equal to 4 (at the marginal value $D = 4$, there are small corrections to the exponents). This modified version of mean-field Landau theory is sometimes also referred to as the Landau–Ginzburg theory of Ising phase transitions. As a clarification, there is also a Ginzburg–Landau theory specific to the superconductivity phase transition, which also includes fluctuations.
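Returning to the symmetric first-order case above, a similar grid search (again my addition, with arbitrary coefficient values) shows the discontinuous jump of the global minimum and locates it at the predicted $T^* = T_0 + B_0^2 / (4 A_0 C_0)$:

```python
import numpy as np

# Sketch: F(eta) = A0*(T - T0)*eta^2 - B0*eta^4 + C0*eta^6, symmetric first-order case.
A0, B0, C0, T0 = 1.0, 1.0, 1.0, 1.0   # arbitrary illustrative values
T_star = T0 + B0**2 / (4 * A0 * C0)   # predicted transition temperature: 1.25 here
eta = np.linspace(0.0, 1.5, 3001)     # eta >= 0 suffices by symmetry

for T in (1.20, 1.24, 1.26, 1.30):
    F = A0 * (T - T0) * eta**2 - B0 * eta**4 + C0 * eta**6
    print(f"T={T:4.2f}  global-minimum eta={eta[np.argmin(F)]:.3f}")
# The minimizer jumps from about sqrt(B0/(2*C0)) ~ 0.707 to 0 as T crosses T* = 1.25,
# rather than going continuously to zero as in the second-order case.
```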
"An introduction to the Ginzburg–Landau theory of phase transitions and nonequilibrium patterns". Physics Reports. 572: 1–42. arXiv:1410.7285. Bibcode:2015PhR...572....1H. doi:10.1016/j.physrep.2015.01.001. ISSN0370-1573. ^Lev D. Landau (1937). "On the Theory of Phase Transitions"(PDF). Zh. Eksp. Teor. Fiz. 7: 19-32. Archived from the original(PDF) on Dec 14, 2015. ^Landau, L.D.; Lifshitz, E.M. (2013). Statistical Physics. Vol.5. Elsevier. ISBN978-0080570464. ^Franzen, H.F.; Haas, C.; Jellinek, F. (1974). "Phase transitions between NiAs- and MnP-type phases". Phys. Rev. B. 10 (4): 1248–1251. Bibcode:1974PhRvB..10.1248F. doi:10.1103/PhysRevB.10.1248. ^Tolédano, J.C.; Tolédano, P. (1987). "Chapter 5: First-Order Transitions". The Landau Theory of Phase Transitions. World Scientific Publishing Company. ISBN9813103949. ^Stoof, H.T.C.; Gubbels, K.B.; Dickerscheid, D.B.M. (2009). Ultracold Quantum Fields. Springer. ISBN978-1-4020-8763-9. ^"Equilibrium Statistical Physics" by Michael Plischke, Birger Bergersen, Section 3.10, 3rd ed Further reading [edit] Landau L.D. Collected Papers (Nauka, Moscow, 1969) Michael C. Cross, Landau theory of second order phase transitions, (Caltech statistical mechanics lecture notes). Yukhnovskii, I R, Phase Transitions of the Second Order – Collective Variables Method, World Scientific, 1987, ISBN9971-5-0087-6 Retrieved from " Categories: Statistical mechanics Phase transitions Lev Landau Hidden categories: Articles with short description Short description is different from Wikidata This page was last edited on 5 August 2025, at 22:01(UTC). Text is available under the Creative Commons Attribution-ShareAlike 4.0 License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Privacy policy About Wikipedia Disclaimers Contact Wikipedia Code of Conduct Developers Statistics Cookie statement Mobile view Edit preview settings Search Search [x] Toggle the table of contents Landau theory 7 languagesAdd topic
188867
https://bio.libretexts.org/Under_Construction/Cell_and_Molecular_Biology_(Bergtrom)/18%3A_The_Cytoskeleton_and_Cell_Motility/18.06%3A_Actin-Myosin_Interactions_In_Vitro_-_Dissections_and_Reconstitutions
18.6: Actin-Myosin Interactions In Vitro - Dissections and Reconstitutions - Biology LibreTexts
"03:_Details_of_Protein_Structure" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "04:_Bioenergetics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "05:_Enzyme_Catalysis_and_Kinetics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "06:_Glycolysis_the_Krebs_Cycle_and_the_Atkins_Diet" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "07:_Electron_Transport_Oxidative_Phosphorylation_and_Photosynthesis" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "08:_DNA_Structure_Chromosomes_and_Chromatin" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "09:_Details_of_DNA_Replication_and_DNA_Repair" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "10:_Transcription_and_RNA_Processing" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11:_The_Genetic_Code_and_Translation" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12:_Regulation_of_Transcription_and_Epigenetic_Inheritance" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "13:_Posttranscriptional_Regulation_of_Gene_Expression" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "14:_Repetitive_DNA_A_Eukaryotic_Genomic_Phenomenon" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "15:_DNA_Technologies" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16:_Membrane_Structure" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "17:_Membrane_Function" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "18:_The_Cytoskeleton_and_Cell_Motility" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "19:_Cell_Division_and_the_Cell_Cycle" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "20:_The_Origins_of_Life" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "21:_Epilogue" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "zz:_Back_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" } Fri, 27 May 2022 18:45:10 GMT 18.6: Actin-Myosin Interactions In Vitro - Dissections and Reconstitutions 89024 89024 Delmar Larsen { } Anonymous Anonymous 2 false false [ "article:topic", "showtoc:no", "authorname:gbergtrom" ] [ "article:topic", "showtoc:no", "authorname:gbergtrom" ] Search site Search Search Go back to previous article Sign in Username Password Sign in Sign in Sign in Forgot password Contents 1. Home 2. Under Construction 3. Cell and Molecular Biology (Bergtrom) 4. 18: The Cytoskeleton and Cell Motility 5. 
18.6: Actin-Myosin Interactions In Vitro - Dissections and Reconstitutions

Gerald Bergtrom, University of Wisconsin-Milwaukee

Several experiments hinted at the interaction of actin and myosin in contraction. For example, actomyosin was first observed as the main component of viscous skeletal muscle homogenates. Under appropriate conditions, adding ATP to such actomyosin preparations caused a decrease in viscosity. However, after the added ATP was hydrolyzed, the mixture became viscous again. Extraction of the nonviscous preparation (before it recongealed and the ATP was consumed) led to the biochemical separation of the two main substances we now recognize as the actin and myosin (thin and thick) filaments of contraction. What's more, adding these components back together reconstituted the viscous actomyosin (now renamed actinomyosin). Adding ATP once again to the reconstituted solution eliminated its viscosity. The ATP-dependent viscosity changes of actinomyosin solutions were consistent with an ATP-dependent separation of thick and thin filaments.

Do actin and myosin also separate in glycerinated muscles exposed to ATP, allowing them to stretch and relax? This question was answered with the advent of electron microscopy. The purification of skeletal muscle myosin from actin (still attached to Z-lines) is cartooned in Figure 18.17, showing what the separated components looked like in the electron microscope.

Figure 18.17: Overview of an isolation of actin (thin) filaments (still on Z-lines) from myosin (thick) filaments.

Next, after mixing actin Z-line and myosin fractions, electron microscopy of the resulting viscous material revealed thin filaments interdigitating with thick filaments (Figure 18.18).

Figure 18.18: Reconstitution of actin filaments (on Z-lines) with myosin filaments.

As expected, when ATP was added to these extracts, the solution viscosity dropped, and electron microscopy revealed that the myosin and actin filaments had again separated. The two components could again be isolated and separated by centrifugation. In yet further experiments, actinomyosin preparations could be spread over an aqueous surface, producing a film on the surface of the water.
When ATP was added to the water, the film visibly "contracted," pulling away from the edges of the vessel and reducing its surface area! Electron microscopy of the film revealed shortened, sarcomere-like structures with closely spaced Z-lines and short I-bands, further confirming the sliding-filament model of muscle contraction.

332 In Vitro & Electron Microscope Evidence for a Sliding-Filament Model

When actin and myosin were further purified from isolated actinomyosin, the thick myosin rods could be dissociated into large myosin monomers. In fact, at ~599 kDa, myosin monomers are among the largest known proteins. Thus, thick filaments are massive polymers of huge myosin monomers! The molecular structure of myosin (thick) filaments is shown in Figure 18.19.

Figure 18.19: Structure of a skeletal-muscle myosin filament and the myosin monomer. Shown is myosin II, the thick filament that spans both sides of the H zone in a sarcomere (upper). The head-and-tail structure of a myosin monomer is shown in the high-magnification electron micrograph and is illustrated in the cartoon (lower). The myosin monomer is itself a polymer of six polypeptides.

An early observation of isolated mammalian actin filaments was that they had no ATPase activity. We've seen that isolated myosin preparations do have ATPase activity, but they catalyze ATP hydrolysis only very slowly compared to intact muscle fibers. Faster ATP hydrolysis occurred only if myosin filaments were mixed with microfilaments (either on or detached from Z-lines). In the electron microscope, isolated myosin protein monomers each appeared to have a double-head and a single-tail region. Biochemical analysis showed that the monomers themselves were composed of two heavy-chain polypeptides and two pairs of light-chain polypeptides, as shown in the Figure 18.19 illustration. In Figure 18.20, a high-magnification, high-resolution electron-micrograph simulation and corresponding drawings illustrate a myosin monomer and its component structures.

Figure 18.20: Purified myosin monomers are digested with enzymes that hydrolyze peptide bonds between specific amino acids. This produces an S1 (head) fragment and a tail fragment with different properties.

Proteolytic enzymes that hydrolyze peptide linkages only between specific amino acids "cut" the myosin monomers into S1 (head) and tail fragments. Shown in Figure 18.20 are electron micrographs of enzymatic digest fractions separated by ultracentrifugation. The tail fragments are parts of the two myosin heavy-chain polypeptides. The S1 fragments consist of a pair of light chains and the rest of the heavy chains. On further analysis, the S1 myosin head fraction had a slow ATPase activity, while the tails had none. The slow activity was not an artifact of isolation; mixing the S1 fraction with isolated actin filaments resulted in a higher rate of ATP hydrolysis. Thus, the myosin heads must be ATPases that bind and interact with actin microfilaments.

333-2 Thick Filaments & Myosin Monomer Structure

CHALLENGE: What do you think is going on here? Why the faster catalytic rate when actin and myosin (or their parts) are mixed?

In fact, S1 myosin heads bind directly to actin, decorating the actin with "arrowheads" visible in the electron microscope (see S1 Arrowheads On Muscle Actin). Even intact myosin monomers could decorate muscle actin. These results are consistent with the requirement that myosin must bind to actin to achieve a maximum rate of ATPase activity during contraction.
The arrowheads on decorated actin still attached to Z-lines are illustrated in Figure 18.21.

Figure 18.21: Actin decoration by myosin-monomer S1 fragments in a kind of reconstitution experiment shows an opposing polarity of actin filaments on opposite sides of the Z-line.

Note that the "arrowheads" always face in opposite directions on either side of the Z-line. These opposing arrowheads suggest that the actin filaments attached to the two Z-lines of a sarcomere are drawn toward each other along the opposite sides of bipolar myosin rods. This is consistent with sliding filaments that draw Z-lines closer together during skeletal muscle contraction, shortening the sarcomeres. For another look at "arrowheads" and other aspects of muscle structure, check out the slide show at Muscle Structure & Physiology-J. Rosenbluth.

334-2 Myosin Monomers & S1 Heads Decorate Actin

CHALLENGE: Explain how the phenomenon of "arrowhead decoration" of sarcomeres can be interpreted to indicate actin "polarity."

This page titled 18.6: Actin-Myosin Interactions In Vitro - Dissections and Reconstitutions is shared under a not declared license and was authored, remixed, and/or curated by Gerald Bergtrom.
188868
https://www.youtube.com/watch?v=q0xN_N7l_Kw
Reflexive, Symmetric, and Transitive Relations on a Set
Dr. Trefor Bazett
391411 views | Posted: 9 Jul 2017

Description: A relation from a set A to itself can be thought of as a directed graph. We look at three types of such relations: reflexive, symmetric, and transitive. A relation is reflexive if every element relates to itself, that is, has a little loop from itself to itself. A relation is symmetric if whenever x relates to y, then y relates to x; this looks like every arrow between x and y having an arrow back. A relation is transitive if whenever xRy and yRz, then xRz (this shorthand is read "x relates to y," and so on); this looks like every two-step path having a corresponding one-step path.

Transcript: I could also consider relations not from an A to a B but from A to itself. This is perfectly valid; for example, we see relations or functions from, say, the integers to the integers all the time. So I can refer to a relation on A. But how should I think of this? How should I sketch this? Well, I don't have a domain and a codomain exactly, or at least the domain and the codomain are the same thing, so I'm only going to draw one. How would I start? By putting a collection of points down; these are just some points that are going to represent the elements of A. Then, one way to think about a relation was as an arrow diagram: you took elements in your domain and spat out elements in your codomain. So what I'm going to do here is take an element, how about this one, and ask where it goes, and I'm going to draw an arrow to somewhere. This time I really do care about putting an arrow in, because we can't use the trick that the domain and the codomain are on the left and the right; we have to indicate where we're starting and where we're finishing. So when I draw an arrow, I say I'm starting at this point and finishing over here. Maybe I start at this point and finish over here, and maybe I start at this point and finish right back at itself, and maybe this one loops around back to itself as well. This kind of construct that I'm drawing, where I put my points down and draw arrows between those points, an arrow diagram that starts on things in A and finishes on things in A, is a relation on A. It is also referred to as a directed graph: you list your various elements and then you draw directed arrows ("directed" means each connection has an arrowhead on it, not just a line between the elements), and you get some graph on this set A. We then have a list of different properties that apply to relations on a set. One of them is the property of being reflexive. We're going to say a relation is reflexive if every element relates to itself, or in other words, if x relates to x for all values of x inside of the set A. So for instance, maybe I'm just going to put three points here in my set, so this is a set with just three things in it. To claim it's reflexive means that any one of these three x values is related to itself, and related to itself
means there's an arrow that starts there and finishes there, which is the little loopy thing that I draw with an arrow on it. So this is saying this point is related to itself, this point is related to itself, and this point is related to itself as well: all three points are related to themselves. Now it might be the case that there are other relations; perhaps this particular one relates there, and maybe there's more, maybe there's not, but the other connections don't matter. To be reflexive, all that matters is that every point has one of these little loops around it. The next property we're going to talk about is called symmetric. A relation on A is going to be symmetric if it has the following property: whenever x is related to y, then y is related to x. In other words, for every pair that I might have, any two elements, how about these two elements here: if there's a relation in the one direction, the x relates to the y, then the y relates to the x as well. As drawn, this particular directed graph that I have, this relation on A, is not symmetric, and it fails to be symmetric because if this point here is my x and this point here is my y, then I have a relationship between x and y (x is related to y here because of this arrow connecting them), but y is not related to x, because there's not an arrow that starts at the y and finishes at the x. However, if I modified this and came in and put an arrow like this, then it would now become symmetric, because the x is related to y and the y is related to x. And note that the fact that this v that I have over here is completely disconnected doesn't matter, because v isn't related to either the x or the y, so I don't have to worry about there being a relation back. So the idea is this: it is symmetric if every time you have an arrow in the one direction, it immediately turns around as an arrow coming right back. The final property we're going to talk about is transitivity, and the idea here is this. Suppose you've got three different points; I've got an x, a y and a z (we may as well label them: here I've got an x, I've got a y, and I've got a z), and what you have is the following: you've got that x is related to y (I've already got that x is related to y, good, that matches) and y is related to z. Well, I don't yet have that drawn in, so let me put it in: I'm going to say y is related, down here, to z. So x goes to y, I've got a relation there; y goes to z, I have a relation there. Then transitivity requires that x is directly related to z. So here's the idea: I can get from x to z in two steps, going from x to y along the green path and then y to z along the pink path. But if it's transitive, every time I can get to something by a long path, I can get there by a short one as well; that is, I can go directly from my x to my z. And now this directed graph I have is transitive, because I can go from my x up to my y and then my y down to the z, and when I have that, I can then go directly from x to z. Note that there's also another loop that happens to be in here from our previous description: notice how the y was related to the x. I can go from y to x, and then, from what we just broke down, I can go from x to z as well. So there's a two-step path here: I can go from y to x and x to z. But I can also go directly: there's also a y to z. So we've always got to be careful: as we start adding in more and more paths, there are more and more ways that it could fail to be transitive. So what you have to verify is that every way that you
can get somewhere in a two-step path, you can get there in a one-step path as well.
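The three definitions in the video translate directly into checks on a finite relation. Here is a minimal sketch in Python (my own illustration, not from the video; the function names are made up), representing a relation on a set A as a set of ordered pairs:

# Minimal sketch: checking reflexivity, symmetry, and transitivity of a
# relation R on a finite set A, with R represented as a set of (x, y) pairs.

def is_reflexive(A, R):
    # Every element must have a "loop": (x, x) in R for all x in A.
    return all((x, x) in R for x in A)

def is_symmetric(R):
    # Every arrow (x, y) must have a return arrow (y, x).
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    # Every two-step path x -> y -> z must have the one-step shortcut x -> z.
    return all((x, z) in R
               for (x, y) in R
               for (y2, z) in R if y2 == y)

A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}
print(is_reflexive(A, R), is_symmetric(R), is_transitive(R))  # True True True

Removing (1, 1) from R breaks both reflexivity and transitivity (the two-step path 1 -> 2 -> 1 would then lack its one-step shortcut), matching the "be careful as you add more paths" warning at the end of the transcript.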
188869
https://www.webqc.org/molecular-weight-of-caco3.html
Printed from Molar Mass, Molecular Weight and Elemental Composition Calculator

Molar mass of CaCO3 (Calcium carbonate) is 100.0869 g/mol

Elemental composition of CaCO3:

| Element | Symbol | Atomic weight | Atoms | Mass percent |
| Calcium | Ca | 40.078 | 1 | 40.0432% |
| Carbon | C | 12.0107 | 1 | 12.0003% |
| Oxygen | O | 15.9994 | 3 | 47.9565% |

Computing molar mass step by step:

First, count the number of each atom in CaCO3: Ca: 1, C: 1, O: 3.

Then, look up the atomic weight of each element in the periodic table: Ca: 40.078, C: 12.0107, O: 15.9994.

Now, compute the sum of the products of atom counts and atomic weights:

Molar mass(CaCO3) = Σ Count(i) × Weight(i)
= Count(Ca) × Weight(Ca) + Count(C) × Weight(C) + Count(O) × Weight(O)
= 1 × 40.078 + 1 × 12.0107 + 3 × 15.9994
= 100.0869 g/mol

Mass percent composition: Ca 40.04%, C 12.00%, O 47.96%. Atomic percent composition: Ca 20.00%, C 20.00%, O 60.00%.

Appearance: Calcium carbonate appears as a white, odorless powder or solid with a crystalline structure.

Sample reactions for CaCO3:

| Equation | Reaction type |
| CaCO3 = CaO + CO2 | decomposition |
| CaCO3 + H3PO4 = Ca3(PO4)2 + H2CO3 | double replacement |
| CaCO3 + HCl = CaCl2 + H2CO3 | double replacement |
| CaCO3 + H2SO4 = CaSO4 + H2CO3 | double replacement |
| CaCO3 + H2O + CO2 = Ca(HCO3)2 | synthesis |

Related compounds: CaC2O4 (Calcium oxalate)

Computing molar mass (molar weight): To calculate the molar mass of a chemical compound, enter its formula and click 'Compute'. In the chemical formula you may use: any chemical element (capitalize the first letter of the chemical symbol and use lower case for the remaining letters: Ca, Fe, Mg, Mn, S, O, H, C, N, Na, K, Cl, Al); functional groups (D, T, Ph, Me, Et, Bu, AcAc, For, Tos, Bz, TMS, tBu, Bzl, Bn, Dmg); parentheses () or brackets []; and common compound names. Examples of molar mass computations: NaCl, Ca(OH)2, K4[Fe(CN)6], CuSO4·5H2O, nitric acid, potassium permanganate, ethanol, fructose, caffeine, water. The molar mass calculator also displays the common compound name, Hill formula, elemental composition, mass percent composition, atomic percent composition, and allows converting between weight and number of moles.

Computing molecular weight (molecular mass): To calculate the molecular weight of a chemical compound, enter its formula and specify the isotope mass number after each element in square brackets. Examples of molecular weight computations: CO2, SO2.

Definitions: Molecular mass (molecular weight) is the mass of one molecule of a substance, expressed in unified atomic mass units (u); 1 u is equal to 1/12 the mass of one atom of carbon-12. Molar mass (molar weight) is the mass of one mole of a substance, expressed in g/mol. The mole is a standard scientific unit for measuring large quantities of very small entities such as atoms and molecules; one mole contains exactly 6.022×10^23 particles (Avogadro's number).

Steps to calculate molar mass:

1. Identify the compound: write down the chemical formula of the compound.
For example, water is H2O, meaning it contains two hydrogen atoms and one oxygen atom.

2. Find atomic masses: look up the atomic mass of each element present in the compound. The atomic mass is usually found on the periodic table and is given in atomic mass units (amu).

3. Calculate the contribution of each element: multiply the atomic mass of each element by the number of atoms of that element in the compound.

4. Add them together: add the results from step 3 to get the total molar mass of the compound.

Example: calculating molar mass. Let's calculate the molar mass of carbon dioxide (CO2). Carbon (C) has an atomic mass of about 12.01 amu, and oxygen (O) has an atomic mass of about 16.00 amu. CO2 has one carbon atom and two oxygen atoms, so the molar mass of carbon dioxide is 12.01 + (2 × 16.00) = 44.01 g/mol.

Weights of atoms and isotopes are from the NIST article.
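The four steps above can be played back in a few lines of Python. This is a minimal illustrative sketch (the atomic-weight dictionary and the function name are my own assumptions, not part of this site's implementation):

# Minimal sketch of the molar-mass procedure: multiply each element's
# atomic weight by its atom count, then sum the products.

ATOMIC_WEIGHTS = {"H": 1.00794, "C": 12.0107, "O": 15.9994, "Ca": 40.078}

def compute_molar_mass(composition):
    # composition maps element symbol -> atom count; CaCO3 is {"Ca": 1, "C": 1, "O": 3}
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

caco3 = {"Ca": 1, "C": 1, "O": 3}
total = compute_molar_mass(caco3)   # 1*40.078 + 1*12.0107 + 3*15.9994
print(f"{total:.4f} g/mol")         # 100.0869 g/mol

# Mass percent of each element, as in the elemental-composition table above:
for el, n in caco3.items():
    print(el, f"{100 * ATOMIC_WEIGHTS[el] * n / total:.4f} %")

Running this reproduces the table's figures: Ca 40.0432%, C 12.0003%, O 47.9565%.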
188870
https://scienceandjoe.com/2020/06/21/gas-stoichiometry-part-1-stp/
Gas Stoichiometry: STP

When it comes to working out the mass, amount or volume of a gaseous substance in isolation or in a reaction, you have to consider the properties of temperature and pressure. The first part of this tutorial on gas stoichiometry focuses on calculating the volume (in dm3 or litres), amount in moles and mass in grams of gaseous substances, in isolation or within a reaction, at standard temperature and pressure (STP). We will look at some example questions and break down how to answer each one step by step.

What is STP?

STP describes the conditions of an environment in which 1 mole of a gas occupies a volume of 22.4 dm3 or 22.4 L. The temperature of the environment must be 0 °C (273.15 K) and it must have a pressure of 1 atmosphere (101.325 kPa). Note: Please check with your teacher, lecturer or tutor which unit you need to use on your course to express volume (dm3 or litres). Let's look at some questions.

Calculating the Volume of Moles of a Gas using the STP Method

These are the equations you will need, depending on which unit you use to express the volume of the gas: volume (dm3) = amount (mol) × 22.4 dm3/mol, or volume (L) = amount (mol) × 22.4 L/mol.

Question: What is the volume in dm3 of 4.10 moles of NO2 at STP?

Working: We can do this in two ways: with a conversion factor to cancel out the units we don't want, or without a conversion factor. Note: Include the same number of significant figures in your final answer as there are in the value in the question with the fewest significant figures. We take the value in moles and multiply it by the known volume of 1 mole of a gas at STP, which is 22.4 dm3. We can then divide by 1 mole of nitrogen dioxide (the equivalent value, as it's based on the volume of 1 mole of a gas at STP) to cancel out the unit of mol NO2 and leave us with the unit we want for our answer, which is dm3 NO2: 4.10 mol NO2 × (22.4 dm3 NO2 / 1 mol NO2) = 91.8 dm3 NO2. This conversion factor method is required for some specifications but not for others. Please check with your teacher, tutor or lecturer if this method is required for your course.

Answer: 91.8 dm3 NO2 (to 3 significant figures)

Let's look at a question involving a reaction:

Question: What volume of ammonia is produced if 7.80 dm3 of hydrogen gas reacts with excess nitrogen at STP?

This kind of question requires a few steps.

Workings:

1) Write a balanced chemical equation that represents the reaction: N2 + 3H2 → 2NH3.

2) Convert the volume of hydrogen into amount in moles at STP, using the equation amount (mol) = volume ÷ 22.4 dm3/mol: 7.80 dm3 ÷ 22.4 dm3/mol = 0.3482 mol H2. This can be done with or without a conversion factor. Note: The amount in moles has been given to four significant figures because this is not the final answer, and we want to avoid rounding to too few significant figures in the middle of the calculation, as this may lead to inaccuracy in your final answer. I would advise going for four significant figures for an amount in moles if it's not the final answer to the question.

3) Work out the stoichiometric ratio (mole ratio) for the two substances in the question, which are hydrogen and ammonia. Reminder: The stoichiometric ratio, also known as the mole ratio, is a ratio that represents the number of moles of one substance to the number of moles of another substance. This could show how many moles of one particular substance are produced from a certain number of moles of another substance as a reactant, or the number of moles of one reactant that reacts with a certain number of moles of another reactant.
According to the balanced chemical equation, 3 moles of hydrogen react to produce 2 moles of ammonia, so the mole ratio is 3 moles of hydrogen to 2 moles of ammonia.

4) Use the mole ratio (stoichiometric ratio) to convert between the amount in moles of hydrogen gas and the amount in moles of ammonia gas. We should cancel out the units we don't want for our answer. We can do this by multiplying by a conversion factor (a fraction) whose denominator is the number of moles of the gas we want to convert from, and whose numerator is the number of moles of the gas we want to convert to: 0.3482 mol H2 × (2 mol NH3 / 3 mol H2) = 0.2321 mol NH3. This leads to our answer, which we must give in the correct units of mol NH3.

5) Convert from moles of ammonia to the volume of ammonia produced: 0.2321 mol NH3 × 22.4 dm3/mol = 5.20 dm3 NH3. This can be done with a conversion factor to cancel out the units we don't want, or without one.

Answer: 5.20 dm3 NH3 (to 3 significant figures).

Calculating the Volume of a Mass of a Gas using the STP Method

You'll need two equations for two separate steps. The first step involves converting from mass in grams to amount in moles: amount (mol) = mass (g) ÷ molar mass (g/mol). Reminder: Molar mass is the mass of one mole of a substance and is the sum of the relative atomic masses of the atoms present. The second step involves converting from amount in moles to volume: volume = amount (mol) × 22.4 dm3/mol (or 22.4 L/mol).

Question: What is the volume in litres (L) of 34.0 g of CH4 at STP?

Working:

1) First, convert the mass of methane into amount in moles. This can be done in two ways, one involving a conversion factor to cancel out the unit of g CH4: 34.0 g CH4 × (1 mol CH4 / 16.04 g CH4) = 2.120 mol CH4. Reminder: 1 mol CH4 is an equivalent value to 16.04 g CH4 in the conversion factor, because the molar mass is the mass of one mole of the gas. Alternatively, a non-conversion-factor method can be used. I would advise going for four significant figures for an amount in moles if it's not the final answer to the question.

2) Next, we find the volume of methane from the number of moles we have by multiplying by 22.4 L: 2.120 mol × 22.4 L/mol = 47.5 L. This can also be done with or without a conversion factor.

Answer: 47.5 L CH4 (3 significant figures)

Let's look at a question involving a reaction:

Question: What volume in dm3 of fluorine gas (F2) reacts with excess hydrogen gas to produce 98.0 g of hydrogen fluoride gas (HF)?

Working: Let's look at this step-by-step once again.

1) Write out a balanced chemical equation that represents the reaction (you may have to write it yourself in a test or exam): H2 + F2 → 2HF.

2) Convert the given mass of hydrogen fluoride (HF) into amount in moles: 98.0 g HF ÷ 20.01 g/mol = 4.898 mol HF.

3) Work out from the balanced equation the stoichiometric ratio (mole ratio) of fluorine to hydrogen fluoride. According to the balanced equation, 1 mole of fluorine gas reacts during the production of 2 moles of hydrogen fluoride, so the mole ratio is 1 mole of fluorine to 2 moles of hydrogen fluoride.

4) Use the stoichiometric ratio (mole ratio) to convert the amount in moles of hydrogen fluoride into amount in moles of fluorine gas: 4.898 mol HF × (1 mol F2 / 2 mol HF) = 2.449 mol F2. I would advise going for four significant figures for an amount in moles if it's not the final answer to the question.

5) Convert from the amount in moles of fluorine gas to volume in dm3: 2.449 mol × 22.4 dm3/mol = 54.9 dm3.

Answer: 54.9 dm3 (to 3 significant figures).
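The same three-stage chain (volume to moles at STP, mole-ratio conversion, moles back to volume) can be sketched in a few lines of Python. This is my own illustration of the method described above, not code from the tutorial, and the function names are invented:

# Minimal sketch of the STP conversion chain used in the worked examples.
MOLAR_VOLUME_STP = 22.4  # dm3 (or L) occupied by 1 mole of any ideal gas at STP

def moles_from_volume(volume):
    return volume / MOLAR_VOLUME_STP

def volume_from_moles(moles):
    return moles * MOLAR_VOLUME_STP

def convert_by_mole_ratio(moles_from, coeff_from, coeff_to):
    # e.g. N2 + 3H2 -> 2NH3: moles NH3 = moles H2 * (2 / 3)
    return moles_from * coeff_to / coeff_from

# The ammonia example: 7.80 dm3 of H2, mole ratio 3 H2 : 2 NH3
mol_h2 = moles_from_volume(7.80)               # 0.3482 mol H2
mol_nh3 = convert_by_mole_ratio(mol_h2, 3, 2)  # 0.2321 mol NH3
print(round(volume_from_moles(mol_nh3), 2))    # 5.2 dm3 NH3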
Calculating the Amount in Moles of a Gas using Volume and the STP Method

The equation you'll need: amount (mol) = volume ÷ 22.4 dm3/mol (or ÷ 22.4 L/mol). We've already carried out the necessary steps in earlier questions; the important thing is knowing when to use certain steps, and when not to, in a particular question.

Question: How many moles are present in 6.80 dm3 of SO3 at STP?

Working: 6.80 dm3 ÷ 22.4 dm3/mol = 0.304 mol.

Answer: 0.304 mol SO3 (3 significant figures)

Remember: The final answer to this question is an amount in moles, and therefore it should have the same number of significant figures as there are in the value in the question with the fewest significant figures.

Let's look at a reaction:

Question: How many moles of iodine react with excess hydrogen to produce 9.85 L of hydrogen iodide gas?

This is another multiple-step process, but let's break it down:

1) Write out a balanced chemical equation that represents the reaction (if it's not already provided): H2 + I2 → 2HI.

2) Convert the volume of hydrogen iodide gas into amount in moles: 9.85 L ÷ 22.4 L/mol = 0.4397 mol HI. I would advise going for four significant figures for an amount in moles if it's not the final answer to the question.

3) Determine the stoichiometric ratio (mole ratio) of iodine to hydrogen iodide using the balanced chemical equation. According to the balanced chemical equation, the mole ratio is 1 mole of iodine to 2 moles of hydrogen iodide.

4) Use the stoichiometric ratio (mole ratio) to convert from moles of hydrogen iodide to moles of iodine: 0.4397 mol HI × (1 mol I2 / 2 mol HI) = 0.220 mol I2.

Answer: 0.220 mol I2 (3 significant figures)

Calculating the Mass of a Gas using the STP Method

You'll need two different sets of equations: one to convert volume into amount in moles, amount (mol) = volume ÷ 22.4 dm3/mol; and a second to convert the amount in moles into mass in grams, mass (g) = amount (mol) × molar mass (g/mol).

Question: What is the mass in grams of 52.4 dm3 of oxygen at STP?

This is a two-step process:

1) Convert the volume of oxygen into amount in moles: 52.4 dm3 ÷ 22.4 dm3/mol = 2.339 mol O2.

2) Calculate the mass in grams of oxygen using the amount in moles: 2.339 mol × 32.00 g/mol = 74.8 g.

Answer: 74.8 g O2 (to 3 significant figures)

Question: How many grams of propane (C3H8) react with excess oxygen (O2) to produce 54.7 L of carbon dioxide (CO2), with water (H2O) as a by-product, at STP?

Note: This is the kind of question designed to test whether you can apply the correct steps and involve the correct gaseous substances in the calculation. That's why it's very important to read the question carefully and not include the wrong substances in the calculation. We are given the volume of carbon dioxide as a product, and we are asked to work out the mass in grams of propane, which acts as a reactant. These are the substances in the reaction we need to focus on. Let's break this down step by step, and please check each step as you progress through the calculation:

Working:

1) Write out the balanced chemical equation that represents the reaction: C3H8 + 5O2 → 3CO2 + 4H2O. This is the only step in which you have to consider oxygen and water; the rest of the calculation focuses only on propane and carbon dioxide.

2) Convert the volume in litres of carbon dioxide into amount in moles (ensure you use the correct units): 54.7 L ÷ 22.4 L/mol = 2.442 mol CO2.

3) Determine the stoichiometric ratio (mole ratio) of propane to carbon dioxide using the balanced chemical equation. According to the balanced chemical equation, the mole ratio is 1 mole of propane to 3 moles of carbon dioxide.

4) Use the stoichiometric ratio (mole ratio) to convert from moles of carbon dioxide to moles of propane: 2.442 mol CO2 × (1 mol C3H8 / 3 mol CO2) = 0.8140 mol C3H8. Ensure you write out this step correctly to cancel out the correct units.
Reminder: The numerator in the fraction you are multiplying by belongs to the substance you want to convert to, and the denominator to the substance you are converting from.

5) Calculate the mass in grams of propane acting as a reactant using the amount in moles: 0.8140 mol × 44.10 g/mol = 35.9 g.

Answer: 35.9 g C3H8 (to 3 significant figures).

So, we've covered questions about gases at standard temperature and pressure, where we know we're working with a constant volume of 22.4 dm3 or 22.4 L for 1 mole of a gas. The steps you'll have to perform vary depending on the question, but as long as the gas is at STP, the aforementioned constant will apply. But what about when the temperature and pressure are not in accordance with STP? In the next part of this tutorial we will examine how to quantify the masses, amounts in moles and volumes of gases at room temperature and pressure (RTP).
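For the mass-based variant of the chain, the propane example above can be sketched the same way. Again, this is my own illustration under the tutorial's assumptions (22.4 L/mol at STP; the molar-mass value is computed from standard atomic masses, not given in the post):

# Minimal sketch of the volume -> moles -> mole ratio -> mass chain at STP,
# reproducing the propane example.
MOLAR_VOLUME_STP = 22.4   # L per mole of gas at STP
MOLAR_MASS_C3H8 = 44.10   # g/mol, from 3*12.011 + 8*1.008

vol_co2 = 54.7                            # L of CO2 produced
mol_co2 = vol_co2 / MOLAR_VOLUME_STP      # step 2: 2.442 mol CO2
mol_c3h8 = mol_co2 * 1 / 3                # step 4: ratio 1 C3H8 : 3 CO2
mass_c3h8 = mol_c3h8 * MOLAR_MASS_C3H8    # step 5: grams of propane
print(round(mass_c3h8, 1))                # 35.9 g C3H8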
188871
https://www.nber.org/system/files/working_papers/w5317/w5317.pdf
GROWTH EFFECTS OF INCOME AND CONSUMPTION TAXES: POSITIVE AND NORMATIVE ANALYSIS

Gian Maria Milesi-Ferretti and Nouriel Roubini

NBER Working Paper 5317
NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue
Cambridge, MA 02138
October 1995

This paper is part of NBER's research programs in Public Economics and International Finance and Macroeconomics. Any opinions expressed are those of the authors and not those of the National Bureau of Economic Research.

© 1995 by Gian Maria Milesi-Ferretti and Nouriel Roubini. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.

ABSTRACT

The effects of income and consumption taxation are examined in the context of models in which the growth process is driven by the accumulation of human and physical capital. The different channels through which these taxes affect economic growth are discussed, and it is shown that in general the taxation of factor incomes (human and physical capital) is growth-reducing. The effects of consumption taxation on growth depend crucially on the elasticity of labor supply, and therefore on the specification of the leisure activity. The paper also derives implications for the optimal intertemporal choice of tax instruments.

[The body of the scanned paper is mirrored and too garbled to recover; only the section headings survive:]

1. INTRODUCTION
2. A SURVEY OF THE LITERATURE
3. THE MODEL
4. THE COMPETITIVE EQUILIBRIUM
5. TAXATION AND LONG-RUN GROWTH
8311818Thb r1T .noiabb noiiB3olls iuoi flE bollE ion 3ob ii ni xBl mii d3irIw lo sno boo noiiqmu2no3 owl riliw bborn i !born "noil3IJboq emorl" 3113 -- 1s3io1 8iiup i iU031 lo noiiB3ollgel B 2SvIovrIi xEI floiiqmuf1o3 B io1siar1i briE b3xBinu i io ii ?.?3 iki s'H noilioqo1'1 : \i\s % ikwoi ik 'ico\i as b321qx8 3d flB3 oiiE 81u2i51-oi-noiiqmuano3 3111 () briE ("8!) ("!) (41) noiiBup niU Ioo1'1 woI1o1 aboo 33thm no mdl ot Iaup xEi noiiqmuenoo B ol 31u 11 hluoo "noii3uboiq 3fnod" moiI luqluo i1i II .xai mu-qmu1 8 ol in1fiviup d iobi&!1 bh,ow bits 2noilsoollB muiidi1iup lo1s ion bluow xsi noiiqmuenoo 3d1 () - l)(X1)( - I) Q]'-( + I )= 1 tsrtir! s bus 51B1 xsi 8(1103111 loi3sI 13fliff hss!D .13isms1sq IS3iO1Of1f138i lo noil3nul B I Q 813(1W II "noii3uboIq srnorl" 01 2boog "ishsm't mon noiiqmhino3 flirk 01 bn3i Its 111w xsi no1iqmuno3 noiiqmu2no3 ilirk 111w Isiibivibni viikoq 81B 8xBi 8m03m 101351 113(1W :31qm12 I3V i flOillIJifli srlT -noiiqm1Jr1o3 sth Aiisi 11B3 sno Isifi svis2do 01 griiisistni i II .suiI s 1bu2 livi1s bsxsi-non 5 ol 4asi1ir1 sili 21 XBJ 8(1103111 IsiIqs3 B 01 138q281 rlliw X\'D lo ti3ii2S18 3111 :fIoiiBXBl 10 2133113 flif13iiW2 iodsl B oi l33qsi rliiw ii3i1s13 arli d bus xi rloiiqmu2no3 s 01 bsq231 rlliw s1i3iias1s 8th c1 bawollol .xsi 3mo3ni IYJAVIA VIOITAXAT JAMITO .ô 3imono3s 138115 28x53 lnsisThb rbidw r1uoir11 2bnnsrb arli bsir1i1rIhf asil f10ii332 auoivslq silT isnnslq 151302 inslovsnad s isrIi 3i1oq xsi sdi 'bu3a 8w noib8a airli III .nOilB3Olls snuoasi bus fliwolg sariis5I" s as nWOmL ai irIT .ai11sw a'irias avi3s3naa1q31 sth simixsm ol l3blo ni saooik bluow srli 23200113 'tdB3ovsrii irismrnsvo silt emit lo gnirimsc1 silt is :(VI 32rns51) "msldoiq asnnsIq aexsi 10 8310(13 au 01 bnoqaai ainags sisvnq wolf 1nu0335 otni nb1si asiB xsi lo rlisq sv1ovni iaiil sifF .(OQ(?I 2B3uJ 81qmsX3 io1 ssa) msldoiq airli nibu1a lo asw owl 315 sisilT 1331du2 nOi13nu1 1i1ilu l3slibrii a'lsmuanO3 sill simixsm 01 as oa in8nmisvo sth d asxsl 1o 3310(13 sf11 iol bswoHo1 ai ll3solqqs airlT .msldolq noiissimixsm 218ffiU2flO3 8th lo anoilibrio3 lsblo-ialiI sill ol .sldiaaoq oats ai (8I) s1oi bus 2B311J rio beasd d3so1qqs tnaiaThb A .(8Q1) sIcricrlD c1 s1qmsxs msIdoiq s as msldoiq ainamcrisvo ath nuiB1um1o1 111 21212(103 rl3solqqB airfF iabud 1sioqm3fl8urii 218ffiU211O3 sf11 ou bsifiua 11331ib asiuilIlRiJp 25200113 1n3mfl18vO srli 113111w ni 1s3iar1q bus nsmurl 1o noilB3oIIs lsmiiqo 213fflU2flO3 ath oi b3lsIsl alnisllano3 sth ol bus lnisllano3 OW%0 \tir o%\, cffiw dsoi sesiIi ;msldoiq tsmiinoa srli b DOL srD moil bsviisb SiB 2ifLLBui2no3 33f!T Isiiq misi ni msldoiq srIi asiqxs 01 a o esxEi bns ainuisi iois1 idt luo svto2 01 bsu sin DOI mii1 s1l xsi snimislsb 1li3i1qmi nedi iliw insmcnsvo sdi sd risorb asiliinsup Ismilqo silT .Ino 2sililnnup 10 11) is is iisrlD ;(OQQI) nsuY ;(OQI) as3uJ d bswollol fbnoiqqs sill ai airiT .aniuisi iols1 bnn 28lSi .aisriio norns (4QQ1) iniduo$I bun ittsri&1-iaeliM ;(d sFQQ!) is Is asnot (Ql) HuH ;(4Q1 Woi zii\c M o oo? I .ô a1nsrnnisvo sill no lniBiiano3 ns svlovni Ion Bsob isbiano3 sw msldoiq noiisxnl Esmilqo silT sill abiow iedio a! 
.inisilano3 3nsvloa llsisvo sill lo noilqs3xs sub rbiw 4sbud boiisq-sd-boiieq svii3iilasi som lo noiiieoqmi sill wofi aauaib isisi sW busi bus woriod 01 bswolls 'lssi1 ai insmnisvo aihjasi iuo lo sliJina sill ai3s1s aensnfl lnsmrnsvog no alnisflaao3 sub ablsi xibnsqqA sub ni bsvloa bns bsinsasiq 'Hsmio1 msldoiq noilExEl Lsraiiqo sflT 81:ailuasi gniwollol ioci\ i scw-o\ \cio rth\ \ \o ' ô noithoqoiI .xibnsqqA sub ss 1ooI rIlsq uliwoi bs3nslsd s no!s O1SX 01 mips sin asxsi 111$ isull niaasu ni aiaiano3 looiq sub lls3iiaiiusH bns anoilibno3 isbio-iaiul sill d nsvi anoilnupe lo msia'e sill ol noiiuloa n ai airli miii vliisv nsrfl bns 3d) lo nupinu ban n31ix3 s11numo1 voiq Ion aob i [nnn mo e3ibuia noilnxal lnrnilqo rl1o th .nnlq noilnxnl Inmuiqo %ir %flO NO1 thtoqrn3n3iffi rnri-gnol s anolaib Isñi xsi sns mdi iaggu2 1rnsrD b 2i2sEsns &IT .alnisliano3 8111 283ub81 isrIi noirioieib xsi ns lOvl3mwtI r1iwoi 2uon8obn8 n it! .oi 01 1IJp3 i 3d b1uoil noi2beb 1r13251q bo 2m183 iii) 21203 mn3nsmi3q bits is1 3vsd 111w mono 3111 lo 33sI i11worg nirl-gno! 3r11 3111 33ni ol 15up3 i 3d 1o1t3c1i bIuorl2 bits (3i1iiu bits noilqmueno3 12o1 b 3LJlflv beinuo3sib bhjorla 28x51 lsmilqo Its lsdi awollol Ii 281sx xsl Its no mn3bn3q3b 21 ffIOflO33 8111 b sIfli fIiwo b33nsEsd .niri no1 8113 l1 oes 3d o ti toi oiV' iti noiii2oqo1 OI 1t %tO3 ! .xibnqqA 8111 88 Iooi1 2! n3flW .8vods b3rnliuo 320111 01 islimia 815 28X53 811103111 io3sI iliod idi 231u281 8111 iol noiliulni silT 21 n3rlW .niri no1 3113 ni oi 01182 3d so1813c1i bhiorfa bits ni3ub8t-rf1wo 315 aexsi iflod evi1iaoq 382 ed 8101818113 bluorle bits flOilS3OllS 3311J0281 io\bns thwo nui-no1 riomaib axs3 sasrll O13 01 lsup8 1t518v0 3111 ar11s ti lud 1si rIiwo amono erli iasTts ton asob xsl noilqmusno3 A .013S 01 lSlJp3 01 Isups lea 3d oats bluork ti 3311311:28310113 noilqmuano3 25 Hew 2B 'mono3e erli lo 01351 lodsl\1sliqs3 nut no1 8f1 ni O18 ni 51D rlliw be3ubolq 21 eiuai1 81811W 1sbom neewied s3neleThb lnslioqmi its ai el3dT lnelsviupe 2! xsi noimqmuano3 £ 18I3SI 3113 uI .811J2131 011 2! 313113 113111W 11! alebom bits 2101351 3Idi3ubolqsl \ Ii 31om131itiuT .nslq xsl lsmiiqo its iii beau xsi 1no 3111 sd 8101818111 bluow bus xsl mua-qmul s 01 .isnoiflo1aib-non oats ai xsl ernosni -iodsl £ ii1i3u gnhd Ion aeob 811J2181 bits O13 01 1sipe a! 31ib3mmi nA .yo1obod1m Iimie a iol (dQQl) iBaoSI bria iII8uuugM noL bus (aQQ1) Ilufi 3111 11311w o13s i I3bom (OQQI) eaxiJ 3113 th 31110301 'todal no xai Iamiiqo ifi iadi i noiiieoqoT audi o noiiaoilqmi .aIJon3gobn3 al lstiqao namud lo noiialumuooa %XV O%WO ti O%\5 %3 5kHO1 b83nBEsd 3111 ol noiii&twii 3111 noIB noiisxsi lsrniiqo Io sui b31&ñlqrno3 sill bui ion ob sW isbud eis1umu33B ol 2bssn insmnisvo sill oiss si oxsl niri-no! 
nsrlw 1isslD .ritEq rliwoig sili iii asaii bsisIumuo3s sill illiw smiibnsqxs lnsmrrisvog s3nsnuI ol isbo ni mn flort2 sill ni 2s2u1q1u2 sill d 1nism b hsl3s1srl3 1 msldoiq noilsxsl lsmilqo sill 01 noiiulo2 sill isdi 3gi 2irlT .nui nol lnioq s nni gnol sill ni noil5xsl on bits mn rloil2 sill ni noilnxsl rIgid gnnulss1 noilsx5l lo sliIoig smil sill no s3rIs!1s1 Isill woil (sQQl) .ls Is asrto1 d znoiislumi .(dEl) .ls 13 asnot d bssfl2 noili2nsll 3111 isili isgu2 (l) 11u8 cl noilsIiimi .ilirl 21 noili2nsli sill gniwb xsl noilqmu2no3 .rioik lsvilslsi ai bohsq bluoif2 insmnisvo sill isili siiils1 3!leiIsslnu sill asif mslcloiq noiisxsl Ismulqo sill 01 noiluIo silT 2li no anmlsi sill i!guoirll gnibrtsq2 lrrsmnisvo smiul soasnfl 01 isbio ni asaulqlu2 lsbud sislumu33s noilsxsl lsmilqo sill 2i2lsrtS irll moiI anoils3ilqmi 3iloq vnsb 01 isbio ni laril aia3gua iilT .aisas iilids alnsmn1svog sill no anoii3hi2s gniaoqmi d lsmsn -- llnsisThb bsis1umiol sd bluorle mslcloiq boiisq lsbud bs3nsthd s b noiiiaoqmi sill 21 alth lo 5253 gnilimil A .1lsoqmsiislni bnsl bits wcmod ol bits (€Ql) oitho3sq .bslqmslls nssd asrl siiaai aiill1o i'lsn EsrnioI on sgbslwom! mo oT .bohsq d bs3nsl.sd £ sbnu slul3Inlla xsl nisimixsrn-i1iwoi sill bsibul2 svsil (4l) iniduo5l bits illsi&I-iasliM lqqua mdcl 1(3111W ni 25253 iii bsi3lJbnoa llsnnon sin asai3mxs sasili Isill bslon ed bluorla II .lsbud a! ia'lns sill led oa 135115 rllwo on asil xsl noilqmuano3 £ mill asilqmi lsisnsg ni airiT .3ilaslsni 21 smm sill io1 .noilsxsl lsiiqs3 Is3iarlq bits nemuil b noilsnidmo3 lsmiiqo sill lo 'bula sill 01 bsl3i1ias xnl niNimixsm-cllwo sill mill slsluosqa SW al3e11s rtiwoig asil xsl noitqfnualio3 £ il3IilW ni 3253 Isisnsg 'lno al3sTls illwoi asil ii isth nsvi xsl noiiqmuano3 sill no 331IBil5I vssi1 siulssl Ilila bluow slui3Jnlla .anoiai3sb noilslumu33E \ll3sib 1331IS oals asxsl smo3ni 101351 sliriw anoiai3sb smaisl\iodsl sib ilguoiill .asitjls1 islimia svsil sm 3iloq xnl isb1Jd-be3nclsd ni.imixsm-sis1lsw sill' TD3A V1TATITMAUQ :HTWO$tO UMA 3XAT .V %! %M%%O t3 O%1\$ NO1 3m113w bitE 3!ffl0fl0380l3Bffl 3± b3ibu1 svEcf IO±uB tev JI noii3 ni tuo b9inioq A zeimono bliow Essi lo ij1 i113i oi oe 2Iobom nhitdiIs3 d 2rrnol3i xEi lo 3it3UflO3 io5I bitE iII9lJnEM no1 (Q1) miX (O1) obd3$1 bitE gni)I (Of) 2E3uJ .(U rfi lls3iq3) noiJExEi 8(1103111 !EnqE3 moil flirk lo inemii3qx3 ai1oq ern lobi2noo (EQI) oniioo&1 bits (sEQ1) 1[si3nsi2du2 nithsido 2I8bom ri3woi won3obn8 lo 3X83flo3 8(13111 xsi iodsl io\bns xsi noiiqmuano3 s 03 ee3smiia 3n3i3Thb a3± istli work (QI) oIed3$I bus s.1oi °.th8113 315113w bits in8leThb amlol3l xsi lo 811s iLiwoi (1guorlils 1adj suis bits noiiEidi1s3 bits 31u331r132 at3bom oth no bn8q8b 2103351 lo fl0i353011531 81E1 8(11 lo 821JE38d 1si3nsdua 3d (153 2333113 315113W 3(11 i23bOm 3d 03 18)h1 315 J1311qx3 35113 work 011W (1QQI) 3V0J bits xu3i3v3U d b3tloq3i sis nibni1 is1imi .2101332 aaol3s xsl lo 283fl8U32fl03 315118W 3rD lo noilsuIsV3 3113 io1 insfloqmi ai aairnsnb Isnoiiiansii lo nOilEi8b121103 lo i8dmun s gniwollol 23imsnb 323± lo noiissrI8i3slEIl3 th3ii1sfls its 8bivolq oals dT .amio131 uLErLI "1i2o3" 3iom 315 XE1 3f1103ffl Isnqs3 3(11 11! 
2825813111 35(11 wofL2 bits 28git5f13 XE3 lElflJefI-3L1113V31 as o13wt lodsI\Isliqs3 3(11 111 no13ubi 3gis1 £ 3vlovni 8± 32u538d xEl 3m03111 iodsl 8(11 lii 2825313111 .Isiiqs3 nsmwl lo noii3uboiq 8(11 03 beilirk 315 21o13s1 lo isdrnlJn £ nd 3vs11 313(12 is1 oa baauaaib 3vs11 8W 231b1112 153i13io3rll 3± 03 noilibbs ni 3imono38 no 23X53 lo 2133113 3(13 no 33fl3biV3 1u(moa-2ao13 3(11 b3fliffIEX3 3VE(1 15(11 3ibu3a 1s3iiiqm3 013d851 bris 113iaE3 bits (Q1) i3r1nnL bits nsgn3 8Q1) ibn3mio)I bris isiaso)1 as r13u2 ff1woi ai ii isdi al nibni1 ,i8rli lo 31113581 itOfflfitO3 B bu1a 01 bu1a moil 13Thb a11uai duorfflA .(d sEQ1) 2luISflifffl3i3b 13(130 33110 r1iwoi 31(11011033 110 asxsi lo 2133113 1ns3i1iu1ia lls3i32iisi2 1i1nebi 03 ifu3iThb Isili 23151 xsi lo 23iIJZB3ffl ni33ui1ano3 ni 11ii3ifhb 3(13 lo 321J5338 .ioI beItoiino3 315 rliwoig nui-no1 lo IszwIiqm3 828(13 ffI3r1i 220135 111332121103 bits 2313nuo3 1o I3dmun 3g151 'I1nei3iTtu2 £ 101 eldslisvs 315 xoiq S as uo 03 31J1I3V31 xsi lo oliSi 3111 25 113112 nsbiud xsi 3(11 lo 23iU253ffl 33581gB no 'I31 3ibu1a mii wt e bn1b ei tui31 to auongox d ol bemu ii1ti el 1qque tod1 di eibu1e ed3 iii ° .eoi1ivib 1noi1&nib lo 11iow iii inqe Ion cmiI 't o1\o b8if1i3w aIflhJi31 XBi moarn 10 231W1 XEI 3ffl03fl1 IoiuiEie 1o zmii no o eisi xsi oviiTI8 313V 101 xsi 1noi1n3vr1o3 3rfT xEi 1Bni1Bm lol xoiq B a BiBb noiiudiiiib fflO3fl! niu iuniiib ion ob bnB !8bom ftt ni bnu13b 1c1hBv xsi 3r11 lo anoiismixolqqB rfguoi si swem bui2 odw (1) EeA bns ifl311&I-i31iM Esobn3M i nonqe3xe nA .xEi lo 23qi 1n313Thb f18wi3d UD3O lo lsnBq ni rIiwotg bnB in3miaevni eiBvilq no xsi noiiqmuno bns IsiiqB3 iodn1 lo i3e11e 8± bnl3 nis.sSl obn8M d beqolsvsb o1oboth3m r1i gniwoltol b3i3uli2no3 31U2E8 xBi niw 8iflnuo3 ol bI bluow 3i318x8 noiifihJJmi bnB ro3z1i 1S± ottt rhiw 3niI ni oi iIu231 li3rIT .(41) 13T '11s3iiaiiBi2 bniI ol iIu3iThb i ii iud in8mivni 3iBvnq 3B1uo3!b 01 bnei zexsi erno3ni loi3BI :13!b1q .rftwoi no 3iB1 xBi lo ail1 insi1ini '1!E3ffnono33 bns N5IAM5J DMIUUJDMOD .8 noiiBxBi 803ffl loi3Bl bilE noiiqmu2no3 lo i3e118 Oiffloilo3eOl3Efn erIi benirn&xe ew eqaq irii nI IBmiiqo 101 2noiiE3!Iqm! 118± b313biffo3 briE 113w briE r1iwo1 3!mono38 f10i1B30t1B 831UO31 rIo bns go1onrfoei noiiB!umu33E lEiiqs3 nBmiJrI 8111 lo sloi sill bsnihsbnu svsrl sw 1E1u3i11sq nI .noilBxEl sxBi noiiqmu2no3 briE tsiiq& 10dB1 lo 2ieTIe sill ninurrris1sb rn sflV!i3E 311J2i81 sill lo sIutErI sill sill ls11E Ii -- noifloiib IElnsmEbnuI sno Ino svlovni xsi floiiqmu2no3 B iBrll flwofl2 SW II lo lovEl ni smil sluaisl ni bn (noiiz3ubs briE lodE!) 2silivil3E 'tsvil3ubolq" ni insq smul nsswisd 5310113 1Sliffli B fI bSI33TIE i 33f0113 ir1T .mono3e sill lo 81E1 rbWolg srli 83Ub51 3101818111 briE 1sflEl sill no!lElumu33E thiiqE3 s3ubsl IE11I &ioittoiib 181110 svlovni o1 3XEi 1sflE! 
sill lud sXEI smooni d noiikil noiiqmuno3 no s3nEiIsl isivssrt sluiEsI bluow rIEIq noilExEl IEmulqo lIE lErli lasgu 2irlT .fllwO1 bnB flill nwor1 svJIiI sw isvswoH .rIlEq fliW01 bssnElsd 3111 01 rIoiliEnEll s± ni asxsl smo3rli no nEflI sxsl s1IflE81 bnsl briE woiioci 1ssi1 ns3 lnsrncn3vo sill rir1w ms!dolq noilfixEl lsrnulqo srll 01 noiiu1o sill £ ni nzlq lrni1qo niholeib b3w dt 1n31q81 taut tai xt m5tolav-bB uli lo lnmit 51B 3flT bna -l2oq atga b eiuaam nhaqmo3 'd bi31ninoo 51B 'edT .nitt ina viiaine1q31 irnonoootoarn .233nq bria amoni xal-iq 8 t oI\o orb rliiw iaBlirIo3 ni i i1u2i 2ir!T .nui noI &Ii iii nozqmuno3 bn moni loiaBl lo noiisxi oiss o 1on8qu-81E11w exi noiiqmu&io si&lw I8bom rIiwotg auonox Is3i2E13o3n ni b8niEdo i3U2 briE 3ii2iI1eInu hI3 i oi d bIuort zxBi nwi-noI 1113 isili iIuex eilT .xsi 3mo3ni lsflqs3 no noii3ifl3i lo i 3iti1E81 atom s ol iejcfu noiisxsi lsmiiqo 3irrIsnb 3nirnEx-s1 01 \ii838fI erli gnoms sfl3neoi3isrl ni3ubo1ini 1ibi!qx3 d bnisg 3d ri133 ali1ini wrthul .ioivsrfsd aif13rnIn3vo .anoiis13bino3 IEffoiiudfli2fb a23bbs ol l3blo lii ins oimono .(QQI) w1riV bnB i31sM MiM-tholEO 3Iqrnx tol , %IcV t O%Y5 XIUJ43A nobibno3 muidiliup3 nwI-2noI lo noiizvh3U IA :woHo1 bseiqxe sd n noiiibno3 isbio-ifl &JT (IA) (+ I)X= (-X)__f))s- (CA) lx (LA) (-I) (- I)= (+ I)çt (1A) - ______ = %(I% I)' l—l—I (A) (-I) =(- I)%X (BA) C) a(-I)1M= (IA) noiiBupa .(I) bnz () iniflnoa 3th (DOI) noiiibnoa l8blo iifl ow flfflIEffl81 sf11' floiiqmuno3 lo 1i1ifli 1snimm sib Isups iwm (IiqB3 IE3idq) noiiqmuno3 10 33fl wob&1 sib lErli wobsr1 siLt lo snBrb lo s3i orb :no!lEIumu33E Iiriqsa iol D01 sib 21 (CA) noiisup .boiioq isvs flu silt Isups o2lB iwm ibirlw xsi lo ton 1flqB3 lo t3ubolq 1&ngum sib 1ups twin noiiqrrlli2no3 lo snq sib ni ( I) noiiups 2snimisieb 2iib (OI) noitups uitiw isiftooT bnod inornniovo no nmiei lo sisi ni snsth sib griiisloi noiisIzJmu33s lsi1q53 nsmir1 iol DOI nibnoqzo11o3 sib i (LA) noutsijpa .ixot sisi sniism sib 2siEups (4A) noiisup3 .niuioi lo sisi thni1Em 2i! ot LBflqs3 nsmurl lo onq wobsul2 silt OE %i OcU%1cO t O1'5 MO1 i (I I) noiiBup3 riliw rboT Isi 3di oi iui3f bni rtoiiqmuno3 nwd noiiIJiii2du 10 ns3wi8d IEiiqE3 lo noi3ollE Imiiqo &li zecIn33b (A) noiisup3 .ixsi 3di ni (81) noiiEup8 enim1i5b .31IJi31 bflB noiiB3IJbs nw3d 3miilo no!iB3olIE thmiqo rIi (BA) noiifiup8 briE rioiiubs bnE no!i3ubolq :81E noiiibno 'i rsvrisfl srIT (VA) 0 ? 
-p" 0 = mi1 niEido oi 2i 1 (BA) bus (A) (4A) moI (8A) (- !)'5=)%(- 1)?i 'u—I —1 niisiirieis1lib-goJ .ixi sill ni (ô1) nolisups abIsi (11) bns (01) 2DO '&rnil sub rijiw ubsoi rbirfw :rnsido sw disq r1iwoi bs3nsIsd sub no1s 1n532c103 315 2noiis3oIls oi3s1 isrb 1351 sill niu bns (El) (QA) (+c)!_= x lsirIq 1o siiq wobsrf2 srfi rfisq rfiwoi bsns1sd sill no1A .bsiiimo nsod SyBil 21qi132du2 3mui 818ffW ui (1) noilsups snimisisb sw (EA) bits () niisiip .sisi sms sub is snil3sb iwm lsiiq& nzmijrl bns monoe suli lo sist rIiwo bs3nslsd sill iol noiasiqxs as riisido sw (QA) bits (CA) niU .ixsi sift 882) lsiiqs3 nrnurl ui sisi r1twoig sill nuisuips d bsnisido 21 ixsi sub iii (VI) noiisiip3 ((II) noilsups) .mono3s sub lo sin thwo beonslsd sill 01 ((s) noilsupe (E) bus (1) 2noiiiaogo lo 1orn .A eldioq 21 ii (V1) bits (1) (EI) 2noiiElJps rIi2U .(4QQI) svoJ bn XUSISVSU 2wollol Iornq sifT :2wolloI n gnitow insq2 omli lo sisuk sib siqxs 01 (OIA) (+.)3= 3lSf(W 1€ (hA) (-1)=3 c+ iisswisd noilBisi snimisisb oi (p1) bnB (81) 1) (41) noiisupe rijiw ischisoi (OIA) su n&Ii ff 5W 3 s1diiv sth riuoiu1i diwoi lo siwi srii bn + gnibtn io gniiow nsq smii efhi I (I =+ [(s- 1)3- f)+- I)(3- 1)(- 1)] I) ] -'? +' noiE1s ir(T .niqiuo Isioi ni noiiqmu2no lo sisrL sth - I ilMw ninnisd isA3id ni rnisi sIb sisrlw o as i ii . + u bn - lo esuthv sIb snimisisb (RI) sii r1iwoi sIb iol miol bsoubei-ims2 sIb bn bn + nsswisd noiiIsi sIb rSiO15iSdi :3 ni gnizsisb llR3ino1onom i (1A) noiiups iBrhJ )bsfI3 .(y)3 lo fli2 sIb no hIi3LrI3 bnsqsb noitfsi iIb d bsiIiinsbi - 2s1ubefh3 sib iolq ot Lu1su i ii 4iwoi 3imono3s no sxs3i lo ai3eTls srb snimisisb o isIio nI .gnisini EIinoionom bn niqoI blEwqu i s1IJb&13 iifl silT .s3Bq + C) ni (1A) briE (QI) sib iii .(3 lo ngi sib no riibnsqsb niqok blBwnwob io biwqu sd bfuo3 3hJbsfI32 bno3s2 silT mon stubsfb2 itfl sill i3s2lsirii oi asil siolsisili bns sno wolsd EwIE ai s1ubsrL3 sib sas3 ismiol ai lo 1svii3sqas1ii abiswnwob sIubsrIo2 bno3e sib afhula xBi nobqmuano3 sib ni sassini nA .svods nivoiq will + i bns rbwoi as3ubsi 1uouidrnsnu suiT .bsiosYtsnu isiuI sib rnvss! shiriw sqots sf0 isifl sib : ssIubsrl3s ibod aflurla IsiiqE3 IE3i21iq 10 fismiif! no XEI sib ni 555313fl1 nA nouiisoqoi .1 noiiiaoqoi nivoiq zurb sis1 rllwoi sib s3ubsi afuiris dbofl .sbiswnwob sno bno3ss sib bns abiswqu -biswqu si sIubsrl3s bno3es sib nsrlw bs3IJbsi siswIs si nibuis io gniAiow insqa smb sib iEili soi4 .niqols-b1Ewnwob si ii neriw niBtls3nu si ngis au shiriw niqoIs :smo3sd (1A) bns (hA) anoiiEups boog 3sAism s si siuisI nsfhW moX' m (1A) (-t)(-1)= c+ -F(' (MA) 1)+(3- I)(- f)(\— 1) = L J '' (V9)s_ ii 11 L I J I L \—I J (8A) ______ = 1 X L (——I)j 2! i thq r1iwo b83nB1d edi noI m3Edoq lemuano3 3di lo sodi thiw DOI sd niisqmoD di lo u38 .ox 3d oi asfi Iiiqs3 no 3iin xsi nin-noI sth isdi (OfA) bns (CA) moi1 335(1)3mm! dieoi (A) noiisups f1iEq c1iwoi b33n81J$d 3113 no1s bris A 10 3ff5f13 1o olsi 3113 nsv7ied ii1sup3 03 15up3 ai (VA) noiisup3 1o HJ ii isril a3ilqrni 0215 2irIT .oix ol Iiwp8 i isdi zeilqrni (LA) thiw 213)1351d mup2 iii m131 8th HsniI .((I) noiisups rljiw 8lsqmo3) oes oi Isup3 i ' isdi gnivolq oi ol asi! v noiiqmu2no3 no xsi 3113 isdi ni1qmi oi Isup3 3d oi asi! 
(8A) noilsup3 lo HJ 3113 no 3(11 3r1i 3351 8111 lol noiliulni fF .((4A) noiisIJp3 rijiw 8lsqmo3) nu'i no1 3113 ni 118w 2S 018 3d nibnid ion 21 1oivsd3d inemnvo &IJ no 2noiiibno3 1ilsmiiqo &11U21103 81i d b32oqmi inisli2no3 31S 2noifloi2ib Its 33n811 :0 = = = nui no1 3(13 nI .niwol1o1 erfi i (0 = ) mci snot edi 111 11013s30115 8311J0231 23n3mn13Vog 8th i3i11231 iOn ob 23n1s132n03 3i1srnhiqo 213ffl1J2fI03 8(13 btis b3iEflifflh!3 .33(0113 ni bsiolni2nO3 ed (153 3m13 iilup 10 noii3uboIq 8(1101! 2! 81!J218I (131(1W 111 3253 3(11 iol looiq 8111' .sior13u 3(11 moil eldslisvs 21 bns noir12s1 2uogolsns ifs xi\ cO YHA$IDOIJ8I "niv o noiixT &li lo noiiE3i1qmI 31llsW" (O8Q1) ombns ngA bnE noiJinA no2niAA 0Q cno.. •22q fl13vinU sbhcfmBD :biidmD vi1o1 1BaiI 3imnyU (V8Q1) IIoAiIio)L 33n3wBJ bns n1A 43Eth8UA xT 3irnSnU moft niE) n3i3iu13 511T" j€8Q1) 13nnb1 nthno1 bn T1O)huio)I 83fl3WSJ nEEA fbEth8LrA .001-18 w'% \so\cI "mio15I %\o'k \, " ctwoiD wonobn3 lo IsboM 3Iqmi s ni grtibn3q nsmcnvoD't (0QQ 1) ttsdo5l , ons8 .1-E01 13dOi3O nq 8Q .11iH-wiD 3M :)hoY wi4 MwpiD 3imono3H (4QQ1) nitIEM-i-Bth lsivBX bn tl3dOSI oTtE8 .VR-Q4 dmsiqe V \vo. miT b noii3o11A i1 lo 1o8I1T A" 1) 2 \cIED ib b1orIuoH :3imonoot3BM Iii tiow&noH" (1Q1) irighW tthbnESI bilE noiso5I brf3i$I didBr1nsfl • V8-ô1I • I QQ I tsdmsU \n\o°, \o wo. • "noiubu1q eiieiA bilE noiioijboiq \o \cuo. ,"ninmH 1o 3L3D-8uIJ di bilE IEI1qED nEmuH 1o nOii3ubrnq &IT" (VQI) mirioY ,doq riH •-E iwuA j riq 4.ok1 V %\o 1hqA 8 cV\5 \cnio. "is13inI 10 isSI 8rb bilE niv2 noiffixET" ,(8V1) .1 1cli3iM ,niAoa leqEq nbhoW bo8 i$l Ib&I oi 31A exET 3imBnU IsmiiqO iIi hA neriW" QQI) 1or13iM j1u8 •'!tjI 1 .0(1 "IEiiqBD nEmIJH bilE IE3idq ritiw riiwoiE) wonegobn3 nO" (EQ1) oin 2 IeunsM bnz ibiol. llEdED .Vô-401 iec1me3eU 101 c(o1o ok'% vo. "muiidi1iupd IsieneD !sioqmerieinl lo bboM beiEi E ni noiiExsT 1n3i31113" ,(8I) eiIqoiiiiiD sIcnfD •8ô1-R1 snut .oM , "2eviJ ejinilnl diiw muiidihiup3 li$leneD ni 3(1103(11 IEiIqED lo noilExET IsmiiqO ,(8I) eilqoithrlD ehnErID .-V0 M %S ni 3j1oq i1 bnB tienoM tmiqO" (1Q!) sorL)1 .t A3hi bnB onithdD .t niw&I .V.V iiarfD tisq) iwuA ,o\o \co. "1boM 313D a3niu8 8bD a3niu8 s iii 3j[0q 1siT !smiiqO" (Q1) 8ocb)I .1 3jnq bns Of1BiidllfD .1 33113IWBJ .V.V i1SfID .-V1 iiiuA OI \oM \cno. "!8boM io2-owT i lij noiixT 3mo3nI ioJ1 lo iT13 311T" (4Q!) 8voJ .51 bivEQ bnB .8 13uf3iM xiieisvU iuuA JWXX ono\ wno. , "rbwrnD uonsgobn3 lo I3boM 111 ii1oq nibn8q insrnniovoD to aiefl3 3imEnU 3rfT" (Q1) 8voJ .51 bivEQ bnB .8 13&f3iM xu313v3U uid&1 O o't,&' cuot • "leboM thwoiO 2uon3obn3 ioi-owT niqobv3b lii iliwoig imono bifE 3is1 xJ3i 8moani 1&iigmM" (Q1) olsd3SI oix bns msilliW 1i3i3 .VD-QOt W% oi "3hinuo3 "nOiBi28vfli lS3fliqm3 riB :rtwoi 3imono3s bns 3iEoq IBzi1' (dEQQI) o1d51 oii2 bns mEilliW hsE3 oo o&\o \cio. • On 18qB rn)hoW 5138M • "ñiwrnD 3imono33 briB 3il&1 IB3iI 1) nnbt n&liEnot briE 3n3 nn3 .cimoU 8 coio \wniot , 'tnoiiExsT omo3nI IEnqED 1o ioD 1E118W i1T" (8VQ I) nittEM nieibI&1 .I-2 jiiqA B rffiw mT emo3nI U 8th ni3E!q8W (81) LIEr1W .T briE n3vofI .8 .1. .U noti1hiI voio &I%' ticio.. "xsT noiiqmunoD "noiiudi1iib351 briE noiin3'n3inI 3bi Iqqii Q1) £lUin3V £v3 bris i331sM fl3dIA E233T 1iM-Eb1a) .3nIJ1 1 I •on 1eqE nbhoW wIdE1 iisqrno iBii13vinU b'D 3f1iw8 8th isvo 8miT briE lsiiqsD 1o noii&oIIA 311T" (IQI) siiwo3I3H iv bn msioI. boownee[) •4II-88II 13dfII333U \\ok \co. i\o'1\, tncnot , 'noiqrnu2noD bas gnin1EsJ nirnsH lo IsboM ekD-81iJ A" VI) amt 1Ern±H •4$-I1 i21JuA 8 tjono 8F. 
uo ry't\ oi0 uonsobn3 lo 2E3boM ni noiixBT fmi1qO" EQ1) koM .3 isiI bn i!fezjnM .3 oIIobo5I .3 ii&1 anot .VR-8t nuI 1O1 \oc\o co. "r1iwoi0 rno3nI IRiiqD b noiixBT IsmiiqO 8th nO" (dQ1) io$1 .3 8q bn iIL8unEM .3 obIoboSI .3 ruJ sno1 .idrn3voM .on iqI nbhoW 5138M ' \cno.. "I3boM j j1oj jhq s1qmi ni noiixT evi1udi1iib351' 8I) .J th3nn3)l bbu1 t3doi3O cw? \o \IO. "13boM fl1isio1 ui8q s ni noiixsT iolo&I 10 toD IbW 3rfT" 81) .J thenn8)I bbu1 .OV-V 1.oM Q \ok • 1iI3vrnU bo1nEi • oernim • "8imono33 3ii2Bf13oi 3imBnU ni noiiBxET IsmiiqO" (OQ I) .J fIi3nn3)l bbu1 -Vt flEff0!131U 3vIsq 3IiT b3) 31BIM .M bns IbwiE3 .1 ni 'noiixT rioi1qmunoD" 8Q1) niIo1 s)I .81 .niwnli bn n3ILA :nobnoJ xT 3w3ibn3gxH ciA (Q1) aIofL3iM iobIs)I 'fl2r8vrnU noiifl8iU .u.nq tx i\ov a (QI) ii- jni)I .ogBairD 10 8th bnB v3ilo oildu'I (.th3) rIiiH .A .0 bns hsH .M .0 cii "noiixT bnB niv (O8Q1) nvieM ni)I .niwnU bnn n3!IA :nobnoj m8i2y xfiT isiboM lo ii218vinU "noiiRxET 1miiqO 'II3imEnU lo &ioii&itqmI &dvi3dO' (1QQ1) .0 rl8do5I nDI .iuuA I3qs nhhoW .1 :8!3D 5ni2u8 bn cliwoiD noii3ubo1q" (881) oIed3SI .T oivo bnB 180jq .1 8hrD .0 tI3doSI gni)I .E-Q1 oio? oI& \swnio. "18boM s3i2sI3o8i4 3is8 dT IE3iJ3I3o3M grnqol8v8U :rIiwrnO 3imOflO33 bits i3iEo 3iEduq" (OQQ1) 018(1351 oigi bits .0 rido51 gni)I .O1-ô1 3dOi3O risq .oM v,coto?t \c %wo. 'noiis3i1qmI iinuoD-oiD :cliwoi0 31m0n033 bits iivil3A 8isg81A noilsxsT" (Q8Q1) Jbnemio)1 .51 bits .51 I3123oA "328cIioqH sbi-1qqu SmOg no 33flsbiV3 tg o%\o o1&' \co. "in3mqoIv8U 31ffl0f1033 lo 3ift(b3M 3111 nO" e(88Q1) .i1 .3 fl8doSI uJ vwk \ici\,.O , "w3iveSl 1s3ii'1snA ifs :2z)imono33 ebi-1qqi" (OQQ!) .iT. .3 rrsdo$I suJ .oI lUOfIiiw mono33 fiR uI iLoq EsiI bus Iat3noM IEmiiqO" (8QI) !oi .J '3ff5M bns .fl .3 rndoH ,s3uJ EQ-U LuI 1 o't4 ' %co. "L1iqsD gnoJ ioI flsM sxsT oU" ,XQI) sA bus ifl I-iLiM siiiM usiD ,.0 3upnn3 &obn8M .snut QV\Q .on ,3qs gniioW IMI , "31u33jnoD 1ithtiuonisqu i3i3thsH ¶rL1woi0 nu5L -oiD :2oimono33ol3sM ui ais5I xT 3vii3efl3" (4Q!) Is3T .J sbnij bus niss5I 1522A , .0 8upim3 sobnoM oo? o't&' %co. ",noilqmu2noD bns 5mo3nI 1oi3EI no ais$I xT lo 2315m1123 iinuoD .Eov ni lRiiqsD 1E3ir1q bus nsmuH 1o noiisxsT IRmiiqO" (s41) iuithjo5l I3huoT4 bus snsM nsiO ifl3r1&I-ie1iM Isdoi3O S88t .on i3qs gnbhoW $13ER4 ,"2l3boM uliwoiO uonobn3 513814 "8imono33 n3qO ni wo0 bus noiisxT" (dtQQ1) iniduoSi L3iwoVl bni siisM nsi0 ifl81&T-ieliM .i3doiaO j88 .on sqsq nthoW u!goo1iut 1Ri3o 01 anoiis3ilggA ii3dl lo emos rbiw ymono33 tsiii1pq b 1gi3nh (8481) iisui nifol q olncuuoT lo iit3vinU :oinoioT nosc1o$1 .M .t cJ .b3 jliM iu1 ndot b atoW bei3eIIoD ui uongobn3 bo 13boM o1-owT flu 3ifflEn'U tsnoiiinsiT" (QI) niflsM-i-sI 3ivsX bnE asD nsgi1IuM .fV-V ,3suuA ,IIIVD oSoZ?t\ \cuot 'uIiwoi0 -\ck \o co. "1siiqsD nsmijH dliw I3boM s ni uliwoiD huB 31ui3Ini xsT" ,(E1) Ius onhos ono?t \ok o %cno. "cf1woi0 ni51-gnoJ bris 2isIsaA 3jjoq nuSI-gnoi (IQQf) oigi oI3de51 .!-OO ,3f11J1 %I;c W%OZ \w onX' 13do13O 4Q ctosso ' co. • "iUwoiO nij5I-noJ bnB rruns5I gnineionI' 8QI) !un iemo5l .voi-ooi \ go. xsT lfimiiqO bnB noiiBxBT IsmiiqO' (OQ!) Iso1 bo1m81 .8V-V! I3iniW \co. "noifixET Io iorrI' cfi ni m31do 3mo :3irnBrnU 1i oi 3iiEi sri moiI" I) Iorbi14 rns .VQ-FV rI3iBM i\o1\o • "sxT 8iE5I-BPI b is1b iliwoiD" Q1) oIscJe5I oiis brIE .J n4 sAoi .O-!4 snu1 O! iK "bboM iliwoiD sI3D-s1!J B ii noiiBIumu33A bns noilsxsT lsiiqED" ,(I8I) .H sxrsiwsJ ,ismmu isdmsiqs 1V wi cno. ILioiv&138 riiv bus noiiqmuanoD iii inisii&ioD smiT" (EQQt) se .H I IswoH bus oliV ixusT %OO?i c ,IiiqA jOt \o1\o \tvo. 
"1sliqsD nsmijH no noilsxsT lo iT1 silT" Q!) qillirl 1sloT 1o iiisvinU osmim ,"iliwoiO 3!mono3H bus noiistumu3A IsiiqsD nsmuH ,noilsxsT" (OQQ1) sW-irfl jisuY .o3irD
188872
https://opencw.aprende.org/courses/chemistry/5-62-physical-chemistry-ii-spring-2008/lecture-notes/
Lecture Notes

Lecture notes files.

| LEC # | TOPICS | LECTURE NOTES |
| --- | --- | --- |
| 1 | Review of thermodynamics | (PDF) |
| 2 | E, A, and S: macroscopic properties for microscopic probabilities {P_i} | (PDF) |
| 3 | Canonical partition function: replace {P_i} by Q | (PDF) |
| 4 | Microcanonical ensemble: replace {P_i} by Ω; Q vs. Ω | (PDF) |
| 5 | Molecular partition function: replace E (assembly) by ε (molecule) | (PDF) |
| 6 | Q corrected for molecular indistinguishability | (PDF) |
| 7 | Translational part of Boltzmann partition function | (PDF) |
| 8 | Boltzmann, Fermi-Dirac, and Bose-Einstein statistics | (PDF - 1.1 MB) |
| 9 | Calculation of macroscopic properties from microscopic energy levels: q_trans | (PDF) |
| 10 | Quantum vs. classical q_trans; equipartition; internal degrees of freedom | (PDF) |
| 11 | Internal degrees of freedom for atoms and diatomic molecules | (PDF) |
| 12 | Rotational partition function; equipartition | (PDF) |
| 13 | Nuclear spin statistics: symmetry number σ; low-temperature limit for rotational partition function | (PDF), Supplement (PDF) |
| 14 | Low- and high-T limits for q_rot and q_vib | (PDF) |
| 15 | Polyatomic molecules: rotation and vibration | (PDF) |
| 16 | Chemical equilibrium I | (PDF) |
| 17 | Chemical equilibrium II | (PDF) |
| 18 | Model intermolecular potentials | (PDF) |
| 19 | Configurational integral: cluster expansion | (PDF) |
| 20 | Virial equation of state | (PDF) |
| 21 | Thermodynamics of solids: Einstein and Debye models | (PDF), Supplement (PDF) |
| 22 | Einstein and Debye solids | (PDF - 1.4 MB) |
| 23 | Phonons: 1-D linear chain of atoms | (PDF) |
| 24 | Free electron theory of a metal | (PDF) |
| 25 | Heat capacity in metals | |
| 26 | Band theory of solids | (PDF) |
| 27 | Crystal phase equilibria | |
| 28 | Kinetic theory of gases: Maxwell-Boltzmann distribution | (PDF) |
| 29 | Kinetic theory of gases: effusion and collisions | (PDF - 1.0 MB) |
| 30 | Kinetic theory of gases: collision dynamics and scattering | (PDF) |
| 31 | Kinetic theory of gases: mean free path and transport | (PDF) |
| 32 | Kinetic theory of gases: transport coefficients | (PDF) |
| 33 | Transition state theory I | (PDF) |
| 34 | Transition state theory II; kinetic isotope effect | (PDF) |
| 35 | Statistical mechanics for photons | |
| 36 | Rates of unimolecular reactions: RRKM | (PDF) |
188873
https://ghoshadi.wordpress.com/wp-content/uploads/2017/06/primorials_aditya-ghosh.pdf
A Lower Bound on Primorials and the Common Difference of Arithmetic Progressions of Primes
Aditya Ghosh, India
April 10, 2017

1 Abstract
Prime numbers in arithmetic progressions have been a subject of interest for many years. Here I derive a lower bound on the common difference of such an arithmetic progression, and then a lower bound on the product of primes, i.e. the primorials.

2 Notations
Suppose $2 = p_1, 3 = p_2, p_3, \dots$ is the sequence of prime numbers. The primorial function is defined as $n\# = \prod_{p \le n} p$. The Chebyshev function is defined as $\theta(n) = \log(n\#)$. The prime counting function is $\pi(x) =$ the number of primes $\le x$. Throughout, $\log n$ is the natural logarithm of $n$, and $\log^2 n$ means $(\log n)^2$, not $\log\log n$.

3 The Results
Using elementary methods, Bonse proved that $p_1 p_2 \cdots p_n > p_{n+1}^2$ for all $n \ge 4$, and $p_1 p_2 \cdots p_n > p_{n+1}^3$ for all $n \ge 5$. Without the restriction to elementary methods, L. Pósa proved that, given any $k > 1$, there exists an $n_k$ such that $p_1 p_2 \cdots p_n > p_{n+1}^k$ for all $n \ge n_k$. In 2000, L. Panaitopol gave another excellent bound:
$$p_1 p_2 \cdots p_n > p_{n+1}^{\,n-\pi(n)} \quad \forall n \ge 2.$$
I was actually working with the common difference of arithmetic progressions of primes. Suppose $p,\ p+d,\ p+2d,\ \dots,\ p+(n-1)d$ are $n$ primes in A.P. ($n \ge 4$). Using Panaitopol's inequality I arrived at the following lower bound on $d$: if $p_m \le n-1 < p_{m+1}$, then $d > n^{\,m-\pi(m)}$. Then, modifying Panaitopol's proof, I arrived at this stronger bound on primorials: given any $k > 1$, there exists an $n_k$ such that, for all $n \ge n_k$,
$$p_1 p_2 \cdots p_n > p_{n+1}^{\,n-\pi(n)+k}.$$

4 The Proofs
First let us prove the bound on the common difference $d$ of an arithmetic progression of primes. Let $A_n = \{p,\ p+d,\ \dots,\ p+(n-1)d\}$ be an arithmetic progression of $n$ primes. We claim that $(n-1)\# \mid d$. If this is not the case, then there exists a prime $q < n$ which does not divide $d$. Consider $A_q = \{p,\ p+d,\ \dots,\ p+(q-1)d\}$. If there existed $i, j$ with $0 \le i < j \le q-1$ and $p+id \equiv p+jd \pmod q$, then $q \mid (j-i)d$, hence $q \mid j-i$ (since $q \nmid d$); but this is impossible because $0 < j-i \le q-1 < q$. So the elements of $A_q$ are congruent to distinct residues modulo $q$. Since $A_q$ has $q$ elements, some $p+kd \in A_q$ is divisible by $q$. But $p+kd \ge p \ge n > q$ [note that $n \le p$, for otherwise $p+pd \in A_n$ but it is not prime], hence $p+kd$ is composite, a contradiction.

Now say $p_m \le n-1 < p_{m+1}$; since $n \ge 4$, we have $m \ge 2$. Using the aforesaid inequality of Panaitopol,
$$d \ge (n-1)\# = p_m\# > p_{m+1}^{\,m-\pi(m)} \ge n^{\,m-\pi(m)}. \quad \text{(Proved.)}$$

Next we prove the lower bound on the primorial function. First we prove the following lemma.

Lemma: For all $n \ge 59$,
$$\log p_{n+1} < \log n + \log\log n + \frac{\log\log n - 0.4}{\log n}.$$

Proof: We use a result due to Rosser and Schoenfeld,
$$p_n \le n\left(\log n + \log\log n - 1/2\right) \quad (\forall n \ge 20). \quad (1)$$
From $\log(1+x) < x$ for $x > 0$, taking $x = 1/n$ gives $\log(n+1) < \log n + \frac1n$. We also get
$$\log\log(n+1) < \log\left(\log n + \frac1n\right) = \log\log n + \log\left(1 + \frac1{n\log n}\right) < \log\log n + \frac1{n\log n}.$$
Hence, along with (1), we obtain
$$\log p_{n+1} < \log(n+1) + \log\big(\log(n+1) + \log\log(n+1) - 1/2\big)$$
$$< \log n + \frac1n + \log\log n + \log\left(1 + \frac1{n\log n} + \frac{\log\log n}{\log n} + \frac1{n\log^2 n} - \frac1{2\log n}\right)$$
$$< \log n + \frac1n + \log\log n + \frac{\log\log n}{\log n} + \frac1{n\log n} + \frac1{n\log^2 n} - \frac1{2\log n}.$$
So it remains to show that
$$\frac1n + \frac{\log\log n}{\log n} + \frac1{n\log n} + \frac1{n\log^2 n} - \frac1{2\log n} < \frac{\log\log n - 0.4}{\log n},$$
or, multiplying through by $\log n$,
$$\frac{\log n}n + \log\log n + \frac1n + \frac1{n\log n} - \frac12 < \log\log n - 0.4,$$
or,
$$\frac{\log n}n + \frac1n + \frac1{n\log n} < 0.1.$$
The LHS is a decreasing function of $n$, and at $n = 59\ (> e^4)$ it is $< 0.1$. So the above inequality holds for all $n \ge 59$. ∎
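Before continuing with the remaining analytic estimates, a quick computational sanity check of the two results stated so far may be helpful. This is a minimal, self-contained sketch in plain Python (trial division only); the 10-term progression 199 + 210k is a classical example of a prime A.P.:

```python
from math import prod

def is_prime(n: int) -> bool:
    # plain trial division; fine for the small inputs used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [p for p in range(2, 2000) if is_prime(p)]  # primes[i] = p_{i+1}

def primorial(n: int) -> int:
    # n# = product of all primes <= n
    return prod(p for p in primes if p <= n)

def pi(x: int) -> int:
    # prime counting function
    return sum(1 for p in primes if p <= x)

# Panaitopol (2000): p1*...*pn > p_{n+1}^(n - pi(n)) for all n >= 2
for n in range(2, 30):
    assert prod(primes[:n]) > primes[n] ** (n - pi(n)), n

# Common-difference bound on the 10-term prime A.P. 199 + 210k
ap = [199 + 210 * k for k in range(10)]
assert all(is_prime(t) for t in ap)
n, d = 10, 210
assert d % primorial(n - 1) == 0      # (n-1)# = 9# = 210 divides d
m = pi(n - 1)                         # p_m <= n-1 < p_{m+1}, so m = pi(n-1) = 4
assert d > n ** (m - pi(m))           # 210 > 10^(4 - pi(4)) = 10^2
print("all checks passed")
```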
Next we use another estimate due to Rosser and Schoenfeld,
$$\pi(x) > \frac{x}{\log x} + \frac{x}{2\log^2 x} \quad (\forall x \ge 59), \quad (2)$$
and an estimate of $\theta(p_n)$ due to G. Robin,
$$\theta(p_n) > n\left(\log n + \log\log n - 1 + \frac{\log\log n - 2.1454}{\log n}\right) \quad (\forall n \ge 3). \quad (3)$$
For $n \ge 59$, we use (2) and the aforesaid lemma to obtain
$$n\left(1 - \frac1{\log n} - \frac1{2\log^2 n} + \frac kn\right)\left(\log n + \log\log n + \frac{\log\log n - 0.4}{\log n}\right) > (n - \pi(n) + k)\log p_{n+1}. \quad (4)$$
Now we wish to show that, given any $k$, the inequality $\theta(p_n) > (n - \pi(n) + k)\log p_{n+1}$ holds for all $n \ge$ some $n_k$. In view of (3) and (4), it suffices to show that
$$\left(1 - \frac1{\log n} - \frac1{2\log^2 n} + \frac kn\right)\left(\log n + \log\log n + \frac{\log\log n - 0.4}{\log n}\right) < \log n + \log\log n - 1 + \frac{\log\log n - 2.1454}{\log n},$$
or,
$$\frac kn\left(\log n + \log\log n + \frac{\log\log n - 0.4}{\log n}\right) - \frac{\log\log n}{\log n} - \frac{\log\log n}{2\log^2 n} - \frac1{2\log n} + \frac{\log\log n - 0.4}{\log n}\left(-\frac1{\log n} - \frac1{2\log^2 n}\right) - \frac{0.4}{\log n} < -\frac{2.1454}{\log n},$$
or,
$$\frac{1.2454}{\log n} + \frac kn\log\log n + \frac kn\cdot\frac{\log\log n - 0.4}{\log n} + \frac kn\log n < \frac{\log\log n}{\log n} + \frac{0.5}{(\log n)^2}\log\log n + \frac1{\log n}\cdot\frac{\log\log n - 0.4}{\log n} + \frac{\log\log n - 0.4}{2(\log n)^3}.$$
The last inequality holds for all $n$ beyond some cut-off $n_k$. In fact, each of the following inequalities,
$$\frac{1.2454}{\log n} < \frac{\log\log n}{\log n}, \qquad \frac kn\log\log n < \frac{0.5}{(\log n)^2}\log\log n,$$
$$\frac kn\cdot\frac{\log\log n - 0.4}{\log n} < \frac1{\log n}\cdot\frac{\log\log n - 0.4}{\log n}, \qquad \frac kn\log n < \frac{\log\log n - 0.4}{2(\log n)^3},$$
holds after a certain cut-off $n_k$ (depending on $k$). Thus our required inequality also holds for all large enough $n$. Hence our proof is complete. ∎

5 References
The references given below are taken from the literature cited in the text.

[1] H. Rademacher, O. Toeplitz: The Enjoyment of Mathematics. Princeton Univ. Press, 1957.
[2] L. Pósa: Über eine Eigenschaft der Primzahlen (Hungarian). Mat. Lapok 11 (1960), 124-129.
[3] L. Panaitopol: An Inequality Involving Prime Numbers. Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat. 11 (2000), 33-35.
[4] J. B. Rosser, L. Schoenfeld: Approximate formulas for some functions of prime numbers. Illinois J. Math. 6 (1962), 64-89.
[5] G. Robin: Estimation de la fonction de Tchebychef θ sur le k-ième nombre premier et grandes valeurs de la fonction ω(n), nombre des diviseurs premiers de n. Acta Arith. 43 (1983), 367-389.

6 About the Author
The author is a student of 12th standard in RKM Boys' Home High School, Rahara, Kol-113. He is a resident of Kolkata, West Bengal, India.
188874
https://www.youtube.com/watch?v=X1jwiuI0zDc
OpenStax Chemistry 2e Chapter 3 Section 3
Michael Stogsdill
Posted: 25 Sep 2023

Description: This video will review OpenStax Chemistry 2e Chapter 3 Section 3 - Molarity.

Transcript: Hello everybody. In this video we're going to cover Section 3.3 of our textbook, where we're going to describe the fundamental properties of solutions, calculate solution concentrations using molarity, and perform dilution calculations using the dilution equation.

So far we've been talking a lot about the composition of pure substances. However, there are very few pure substances that we interact with on a regular basis; typically we're going to see mixtures, where we have one or more pure substances mixed together. But similar to pure substances, we still need some sense of the relative composition of a mixture, because it plays an important role in determining its properties: knowing how much of each different pure substance is in the mixture determines what that mixture looks like and how it behaves. There are a lot of different examples of mixtures. One example would be coffee: in coffee we have water, which is a pure substance; we have all of that plant extract from the coffee, which is a whole bunch of different kinds of pure substances; and we may add sugar and dissolve it in there, so now that's a component of the mixture too. Even something as simple as your morning coffee is really a very rich mixture of many different kinds of pure substances.

There is a specific type of mixture that chemists are often very interested in, especially initially, because we have a lot of good math that allows us to treat it: solutions. Solutions occur frequently in nature, and they are basically homogeneous mixtures. Going back to the definition of a homogeneous mixture, this means a uniform composition and the same properties throughout the entire volume. Chemists really like this type of behavior, because it means I can take whatever sample I want from the homogeneous mixture and know it will be exactly the same as any other sample I take from it. The relative amount of a given solution component is known as its concentration: not the total amount, but the amount relative to the rest of the mixture.

Let's talk a little more about the components of, and the language around, solutions. A solution consists of two kinds of components. The solvent is the component with a concentration significantly greater than all the other components, so whatever pure substance makes up the majority of your mixture is the solvent. Going back to our coffee example, the solvent would be water, because if we analyzed it we would find more water by mass than any other component in the mixture. A solute is a component that is typically present at a much lower concentration than the solvent; the things we put into the solvent are the solutes. A solution will only have one solvent, since it's the one component present at a larger concentration than all the others, but it may have many different solutes dissolved in it.
A solution in which water is the solvent is called an aqueous solution ("aqueous" meaning water). An example of an aqueous solution is distilled white vinegar, which is actually acetic acid dissolved in water. You can often see that on the label: here it says "reduced with water to five percent acidity," which means five percent of it by mass is acetic acid and the other 95 percent is just water. And if you've ever worked with white vinegar, you know it has very different properties from water, a very pungent smell and taste, and that's only from the addition of a relatively small amount, five percent, of acetic acid.

Now we're going to talk about molarity. Molarity is a very important type of concentration that we're going to see over and over again this term, and it will keep coming up if you continue on in chemistry, so it's very important that you understand the idea behind it. Molarity has a unit, and it is just a capital M, which can be confusing next to things like moles. It's very important to recognize that a capital M means molarity; when you write out moles as a unit, it should always be written "mol" or "moles," not just a capital M or a lowercase m, because those have different meanings in chemistry. Molarity is the number of moles of solute in exactly one liter of the solution: not one liter of solvent, but one liter of the total solution. So M equals the moles of solute divided by the liters of solution.

Molarity is very useful, as I've alluded to, and it's used in many applications in chemistry, so let's look at an example of a molarity calculation. A 355-milliliter soft-drink sample contains 0.133 mole of sucrose (table sugar). What is the molar concentration of sucrose in the beverage? A couple of things here. First, we need to pay attention to the units of our numbers: 355 milliliters is a unit of volume, so that's the volume for my calculation, but I don't want milliliters, I want liters, so I need to make a conversion in my denominator to ensure I have liters there. Second, I was given moles directly instead of mass, so no conversion is needed there; that goes directly into my numerator. Finally, the word "molarity" never actually appeared in this question: I was asked for the molar concentration. When you see language like that, you should be keying in that it means molarity, moles per liter of solution. I plug in all my values, do the multiplication and then the division, and I get 0.375 molar sucrose in my soft drink.
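(As an aside, the worked example above is easy to reproduce; here is a minimal Python sketch using the numbers from the transcript:)

```python
# Molarity: M = moles of solute / liters of solution
moles_sucrose = 0.133     # mol, given directly
volume_mL = 355           # mL of soft drink (the whole solution)

volume_L = volume_mL / 1000           # convert mL -> L before dividing
molarity = moles_sucrose / volume_L
print(f"{molarity:.3f} M sucrose")    # 0.375 M
```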
So, dilution is really important in chemistry. It's fairly difficult to take a solution and make it more concentrated unless you actually have pure solute to add, but it is usually fairly easy to add more solvent and make a solution less concentrated, and this is typically what is done: a lot of times a chemist will prepare a stock solution that's more concentrated than any solution he's going to need, and then just add solvent as needed to produce solutions of specific concentrations. In this image we can see the effects of dilution. Both of these solutions contain the exact same amount of copper nitrate (the blue stuff), but one of them is more concentrated and therefore has a darker color than the other, more dilute one, which was made by adding more water, in this case more solvent, to dilute it down.

The important thing we need to be able to do is calculate what the concentration will be after a dilution. We can derive a formula (we'll limit ourselves to molarity here) by starting from the fact that the molar amount of solute, n, equals the concentration (moles per liter) times the liters of solution: if I know the concentration of a solution and multiply it by the volume of that solution, I can figure out how many moles of solute I have altogether. We can write an expression like this for two conditions: one before any dilution, when I have my more concentrated solution, and a second expression for after the dilution, when I have my more dilute solution. The important thing to realize is that diluting hasn't reduced the amount of solute at all, so n1 equals n2, which means I can substitute into the equation: M1 × L1 = M2 × L2. The concentration of my initial solution times the volume of that solution that I diluted equals the new concentration times the volume I wound up with. We showed this for the case of molarity, where we had molar concentrations and volumes, but it can be generalized to any concentration unit (we'll talk about some alternative concentration units in a little while). In general it is always true that C1 × V1 = C2 × V2; this is our dilution equation.

Let's look at an example of a calculation like that. A lot of students shut down when they see this kind of question just because of all the numbers thrown at them; the important thing is to remember that it's a fairly simple equation, and to make sure you match the numbers up appropriately when plugging them in. So: 0.850 liters of a 5-molar solution of copper nitrate is diluted to a volume of 1.80 liters by the addition of water. What is the molarity of the diluted solution? The first thing I notice is that the concentration I was given is already in molarity, so I don't have to make any adjustment; the concentration unit you put in is the concentration unit you get out. I also notice I have 0.850 liters of this copper nitrate, so those two get matched together: 5 molar and 0.850 liters of solution go on one side. On the other side, I'm being asked for the concentration, so C2 is what I solve for, and the volume that goes with it is the 1.80 liters. I plug those in and then solve for C2, because that's what I'm being asked for.
I get this expression, and I think this is one of the best places to check yourself when plugging into this equation. If you think about it, you're diluting this solution, meaning your concentration should be going down. You had a choice: you could put 1.8 liters in the numerator and 0.85 liters in the denominator, or the other way around. You need the smaller volume in the numerator and the larger volume in the denominator, so that when you do the math you wind up with a smaller concentration. That's a nice little check for you: thinking about what's happening in the problem tells you you've placed the volumes in the right spots. You can also check, as I said before, that because the liters cancel you wind up with molarity at the end. It would be necessary, if these weren't both in liters, to make sure they at least had the same unit; while it is important to have liters when calculating molarity, in dilution calculations the two volumes only need matching units. They could both be in milliliters and, since the units cancel, the result would work out numerically the same. I do always suggest converting everything to liters right off the bat, because it's the safer route and works for all of the problems, but as long as the units cancel, this type of problem will work out.
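(Likewise, a minimal Python sketch of the dilution equation with this example's numbers; the final value, about 2.36 M, is not spoken in the video but follows directly from the arithmetic:)

```python
# Dilution equation: C1*V1 = C2*V2 (volumes just need matching units)
C1 = 5.0                 # M, stock copper nitrate
V1 = 0.850               # L of stock that gets diluted
V2 = 1.80                # L, final volume after adding water

C2 = C1 * V1 / V2
assert C2 < C1           # sanity check: dilution lowers concentration
print(f"{C2:.2f} M")     # 2.36 M
```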
188875
https://smartachievers.online/public/study_materials/jee-maths-properties-of-triangles.pdf
Mathematics: Properties of Triangles

Single Correct Answer Type

1. If $a, b, c$ are the sides of a triangle $ABC$ and the roots of $a(b-c)x^2 + b(c-a)x + c(a-b) = 0$ are equal, then $\sin^2\frac A2,\ \sin^2\frac B2,\ \sin^2\frac C2$ are in
(A) AP (B) GP (C) AGP (D) HP
Key. D
Sol. Since $a(b-c) + b(c-a) + c(a-b) = 0$, $x = 1$ is a root; the roots being equal, the other root is also 1, so the product of the roots gives $\frac{c(a-b)}{a(b-c)} = 1$, i.e. $ab - ac = ca - bc$, hence $b = \frac{2ac}{a+c}$ and $a, b, c$ are in HP. Then $\frac1a, \frac1b, \frac1c$ are in AP, so $\frac sa - 1, \frac sb - 1, \frac sc - 1$, i.e. $\frac{s-a}a, \frac{s-b}b, \frac{s-c}c$, are in AP. Multiplying each term by $\frac{abc}{(s-a)(s-b)(s-c)}$ shows that $\frac{bc}{(s-b)(s-c)}, \frac{ca}{(s-c)(s-a)}, \frac{ab}{(s-a)(s-b)}$ are in AP, so their reciprocals, i.e. $\sin^2\frac A2, \sin^2\frac B2, \sin^2\frac C2$, are in HP.

2. Given in $\triangle ABC$: $AB = 1$ cm, $AC = 2$ cm. The lengths of the external angular bisectors of angles $A$ and $C$ are equal, i.e. $AA' = CC'$ (in the figure, the bisectors make angles $90° - \frac A2$ and $90° - \frac C2$ with the sides). If $BC > 1$, then $BC =$
(a) $\frac{1+\sqrt{15}}2$ (b) $\frac{1+\sqrt{13}}2$ (c) $\frac{1+\sqrt{17}}2$ (d) $\frac{1+\sqrt{19}}2$
Key. C
Sol. The external bisector from $A$ has length $\frac{2bc}{b-c}\sin\frac A2$, and that from $C$ has length $\frac{2ab}{a-b}\sin\frac C2$; equating the two with $b = 2$, $c = 1$ leads to option (c).

3. In $\triangle ABC$, the bisector of angle $A$ meets side $BC$ at $D$ and the circumscribed circle at $E$; then $DE$ equals
(A) $\frac{a^2}{2(b+c)}\sec\frac A2$ (B) $\frac{a^2}{2(b+c)}\sin\frac A2$ (C) $\frac{a^2}{2(b+c)}\cos\frac A2$ (D) $\frac{a^2}{2(b+c)}\operatorname{cosec}\frac A2$
Key. A
Sol. By the intersecting-chords property, $AD\cdot DE = BD\cdot DC = \frac{ac}{b+c}\cdot\frac{ab}{b+c}$, and $AD = \frac{2bc\cos\frac A2}{b+c}$, so
$$DE = \frac{BD\cdot DC}{AD} = \frac{a^2}{2(b+c)}\sec\frac A2.$$

4. In $\triangle ABC$, if $A - B = 120°$ and $R = 8r$, then $\frac{1+\cos C}{1-\cos C}$ equals (all symbols have their usual meaning)
(A) 12 (B) 15 (C) 21 (D) 31
Key. B
Sol. $\frac rR = \cos A + \cos B + \cos C - 1 = 2\cos\frac{A+B}2\cos\frac{A-B}2 + \cos C - 1$, so
$$\frac18 = \sin\frac C2 - 2\sin^2\frac C2,$$
giving $\sin\frac C2 = \frac14$, hence $\cos C = 1 - \frac18 = \frac78$ and $\frac{1+\cos C}{1-\cos C} = \frac{15/8}{1/8} = 15$.

5. In $\triangle ABC$, if $A = 30°$ and $\frac bc = \frac{2+\sqrt3+\sqrt2-1}{2+\sqrt3-\sqrt2+1}$, then the measure of $C$ is
A) $67\tfrac12°$ B) $22\tfrac12°$ C) $52\tfrac12°$ D) $97\tfrac12°$
Key. C
Sol. Use $\tan\frac{B-C}2 = \frac{b-c}{b+c}\cot\frac A2$ with $B + C = 150°$. Here $\frac{b-c}{b+c} = \frac{\sqrt2-1}{2+\sqrt3}$, so $\tan\frac{B-C}2 = \frac{b-c}{b+c}\cot15° = \sqrt2 - 1 = \tan22\tfrac12°$, giving $B - C = 45°$; with $B + C = 150°$, $C = 52\tfrac12°$.

6. In $\triangle ABC$, if $\cos A + \sin A - \frac2{\cos B + \sin B} = 0$, then $\frac{a+b}c$ is equal to
A) $\sqrt2$ B) 1 C) $\frac1{\sqrt2}$ D) $2\sqrt2$
Key. A
Sol. The given relation is $(\cos A + \sin A)(\cos B + \sin B) = 2$, i.e. $\cos(A-B) + \sin(A+B) = 2$, so $\cos(A-B) = 1$ and $\sin(A+B) = 1$: $A = B$ and $C = \frac\pi2$. Then $\frac{a+b}c = \frac{\sin A + \sin B}{\sin C} = 2\sin45° = \sqrt2$.

7. In a $\triangle ABC$, $\sin\frac A2 = \frac35$ and $II_1 = \frac92$, where $I_1, I_2, I_3$ are the excentres and $I$ is the incentre; then the circumradius $R =$
A) $\frac{15}2$ B) $\frac{15}4$ C) $\frac{15}8$ D) $\frac13$
Key. C
Sol. $II_1 = 4R\sin\frac A2$, so $R = \frac{9/2}{4\cdot\frac35} = \frac{15}8$.

8. Let there exist a unique point $P$ inside $\triangle ABC$ such that $\angle PAB = \angle PBC = \angle PCA = \omega$. If $PA = x$, $PB = y$, $PC = z$, $\Delta$ is the area of $\triangle ABC$, and $a, b, c$ are the sides opposite the angles $A, B, C$ respectively, then $\tan\omega$ is equal to
A) $\frac{a^2+b^2+c^2}{4\Delta}$ B) $\frac{a^2+b^2+c^2}{2\Delta}$ C) $\frac{2\Delta}{a^2+b^2+c^2}$ D) $\frac{4\Delta}{a^2+b^2+c^2}$
Key. D
Sol. $\cot\omega = \cot A + \cot B + \cot C = \frac{a^2+b^2+c^2}{4\Delta}$, so $\tan\omega = \frac{4\Delta}{a^2+b^2+c^2}$.
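(Problem 4 admits a quick numeric cross-check; the sketch below, in Python, reconstructs the angles from the derived condition $\sin\frac C2 = \frac14$ and the given $A - B = 120°$:)

```python
from math import asin, cos, pi, sin

C = 2 * asin(1 / 4)                 # sin(C/2) = 1/4, as derived above
A = (pi - C) / 2 + pi / 3           # from A + B = pi - C and A - B = 2*pi/3
B = (pi - C) / 2 - pi / 3

# R = 8r should hold: r/R = 4 sin(A/2) sin(B/2) sin(C/2)
r_over_R = 4 * sin(A / 2) * sin(B / 2) * sin(C / 2)
assert abs(r_over_R - 1 / 8) < 1e-12

print((1 + cos(C)) / (1 - cos(C)))  # 15.0, matching Key B
```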
9. In a triangle $ABC$ with usual notations, if $r = 1$, $r_1 = 7$ and $R = 3$, then the triangle $ABC$ is
A) equilateral B) acute angled which is not equilateral C) obtuse angled D) right angled
Key. D
Sol. $r_1 - r = 4R\sin^2\frac A2$, so $6 = 12\sin^2\frac A2$, i.e. $\sin\frac A2 = \frac1{\sqrt2}$ and $A = \frac\pi2$.

10. In a triangle $ABC$, $a : b : c = 4 : 5 : 6$. The ratio of the radius of the circumcircle to that of the incircle is
A) 15/4 B) 11/5 C) 16/7 D) 16/3
Key. C
Sol. Take $a = 4, b = 5, c = 6$ and use $R = \frac{abc}{4\Delta}$, $r = \frac\Delta s$: $\frac Rr = \frac{abc\,s}{4\Delta^2} = \frac{120\times\frac{15}2}{4\times\frac{1575}{16}} = \frac{16}7$.

11. In triangle $ABC$, … , then $b =$
1) 16 2) 20 3) 24 4) 28
Key. 1

12. If in a triangle $ABC$, … , then …
Key. 1

13. $ABCD$ is a quadrilateral with $AB = a$, $BC = b$, $CD = c$, $DA = d$, inscribed in a circle and circumscribed about another circle. Then the value of $\tan^2\frac A2$ is
1) $\frac{ad}{bc}$ 2) $\frac{ab}{cd}$ 3) $\frac{bc}{ad}$ 4) $\frac{ac}{bd}$
Key. 3
Sol. Here $\cos A = \frac{ad-bc}{ad+bc}$, so $\tan^2\frac A2 = \frac{1-\cos A}{1+\cos A} = \frac{bc}{ad}$.

14. In a triangle $ABC$, $C = 60°$ and $R = 16$; then $II_3 =$
1) 30 2) 31 3) 32 4) 34
Key. 3
Sol. $II_3 = 4R\sin\frac C2 = 64\sin30° = 32$.

15. In a triangle $ABC$, $r = 2$ and … ; then $r_1 =$
1) … 2) 2 3) 3 4) 4
Key. 2

16. If $a, b, c$ are the sides of a triangle, then the minimum value of $\frac a{b+c-a} + \frac b{c+a-b} + \frac c{a+b-c}$ is
1) 3 2) 6 3) 8 4) 1/8
Key. 1
Sol. Since $(x_1+x_2+x_3)\left(\frac1{x_1}+\frac1{x_2}+\frac1{x_3}\right) \ge 9$, we get $\frac s{s-a}+\frac s{s-b}+\frac s{s-c} \ge 9$, hence $\frac a{s-a}+\frac b{s-b}+\frac c{s-c} \ge 6$; each term of the given sum is half the corresponding term, so the minimum is 3.

17. If $x, y, z$ are the distances of the vertices of triangle $ABC$ from its orthocentre, then $x + y + z =$
1) $2(R+r)$ 2) $2(R-r)$ 3) $2R-r$ 4) $2R+r$
Key. 1
Sol. $x = 2R\cos A$, $y = 2R\cos B$, $z = 2R\cos C$, and $\cos A + \cos B + \cos C = 1 + \frac rR$.

18. If in a triangle the ex-radii $r_1, r_2, r_3$ are in the ratio $1:2:3$, then the sides are in the ratio
1) 5:8:9 2) 1:2:3 3) 3:5:7 4) 1:5:9
Key. 1
Sol. Since $r_1 = \frac\Delta{s-a}$ etc., $s-a : s-b : s-c = 6 : 3 : 2$; their sum is $s$, so with $s = 11k$ we get $a = 5k$, $b = 8k$, $c = 9k$.

19. If the lengths of the sides of a triangle $ABC$ are 3, 4 and 5 cm, then the distance between its orthocentre and circumcentre is
1) 2.5 cm 2) 2 cm 3) 1.5 cm 4) 8
Key. 1
Sol. The triangle is right angled, so the orthocentre is the right-angle vertex and the circumcentre is the midpoint of the hypotenuse; the distance is $R = 2.5$.

20. If the lengths of the sides of a triangle $ABC$ are 3, 4 and 5 cm, then the distance between its incentre and circumcentre is
Key. 2
Sol. $OI = \sqrt{R^2 - 2Rr} = \sqrt{6.25 - 5} = \frac{\sqrt5}2$, since $r = 1$ and $R = 2.5$.

21. If $P$ is a point on the altitude $AD$ of triangle $ABC$ such that $\angle DBP = \frac B3$, then $AP$ is equal to
A) $2a\sin\frac C3$ B) $2b\sin\frac C3$ C) $2c\sin\frac B3$ D) $2c\sin\frac C3$
Key. C
Sol. $\angle ABP = B - \frac B3 = \frac{2B}3$ and $\angle APB = 90° + \frac B3$, so
$$AP = \frac{c\sin\frac{2B}3}{\sin\left(90° + \frac B3\right)} = \frac{2c\sin\frac B3\cos\frac B3}{\cos\frac B3} = 2c\sin\frac B3.$$

22. In triangle $ABC$, if $B = 90°$, then $\frac{r_1+r_3}{R} = $ …
Key. 3
Sol. $r_1 + r_3 = 4R\cos^2\frac B2$.

23. A circle is inscribed in an equilateral triangle of side 6 units. The area of any square inscribed in this circle is
1) 6 2) 36 3) 9 4) 72
Key. 1
Sol. The inradius is $r = \frac6{2\sqrt3} = \sqrt3$; the square's diagonal equals $2r$, so $x\sqrt2 = 2\sqrt3$ and $x^2 = 6$.

24. If the area of triangle $ABC$ is $b^2 - (c-a)^2$, then $\tan B =$
1) $\frac34$ 2) $\frac14$ 3) $\frac8{15}$ 4) $\frac{15}8$
Key. 3
Sol. $\Delta = b^2 - (c-a)^2 = (b+c-a)(b-c+a) = 4(s-a)(s-c) = 4\Delta\tan\frac B2$, so $\tan\frac B2 = \frac14$ and $\tan B = \frac{2\cdot\frac14}{1-\frac1{16}} = \frac8{15}$.

25. If in a triangle $ABC$, $(r_2-r_1)(r_3-r_1) = 2r_2r_3$, then the triangle is
1) right angled 2) isosceles 3) equilateral 4) right angled isosceles
Key. 1

26. If $r_1, r_2, r_3$ are the exradii of any triangle, then $r_1r_2 + r_2r_3 + r_3r_1$ is equal to
Key. 2
Sol. $r_1r_2 + r_2r_3 + r_3r_1 = s^2$.

27. If in a triangle $ABC$, … , then $p + q =$
1) … 2) 2 3) 3 4) 4
Key. 2
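(Problems 18 through 20 are easy to verify numerically. The sketch below uses Heron's formula together with the standard identities $OI^2 = R(R-2r)$ and $OH^2 = 9R^2 - (a^2+b^2+c^2)$:)

```python
from math import sqrt

def triangle_data(a, b, c):
    s = (a + b + c) / 2
    area = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    r = area / s                                    # inradius
    R = a * b * c / (4 * area)                      # circumradius
    exradii = [area / (s - x) for x in (a, b, c)]   # r1, r2, r3
    return r, R, exradii

# Problem 18: sides 5:8:9 do give exradii in the ratio 1:2:3
_, _, ex = triangle_data(5, 8, 9)
print([round(x / ex[0], 6) for x in ex])            # [1.0, 2.0, 3.0]

# Problems 19-20: the 3-4-5 right triangle
r, R, _ = triangle_data(3, 4, 5)
print(sqrt(9 * R**2 - (3**2 + 4**2 + 5**2)))        # OH = 2.5
print(sqrt(R * (R - 2 * r)))                        # OI = sqrt(5)/2 ~ 1.118
```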
28. In a triangle, if r₁ = 2r₂ = 3r₃, then (a + b + c)/c = … Key: 4. Sol: s − a : s − b : s − c = 1 : 2 : 3, and since these parts sum to s, a : b : c = 5 : 4 : 3, giving (a + b + c)/c = 12/3 = 4.

29. In a triangle ABC, medians AD and CE are drawn. If AD = 5, ∠DAC = π/8 and ∠ACE = π/4, then the area of triangle ABC equals (1) 25/9 (2) 25/3 (3) 25/18 (4) 10/3.
Key: 2. Sol: AG = (2/3)AD = 10/3; in ΔAGC, ∠GAC = π/8, ∠GCA = π/4, so the sine rule gives GC = AG sin(π/8)/sin(π/4), and [ABC] = 3[AGC] = 25/3.

30. In a triangle ABC, …, then the value of … is (1) 18 (2) 81 (3) 72 (4) 27. Key: 2.

31. If in a triangle ABC, r₁ = 3, r₂ = 10, r₃ = 15, then R equals … Key: 4. Sol: 1/r = 1/3 + 1/10 + 1/15 = 1/2, so r = 2 and Δ = √(r·r₁r₂r₃) = 30; the sides come out 5, 12, 13 and R = 13/2.

32. In a triangle ABC, the maximum value of tan(A/2) tan(B/2) tan(C/2) is (1) s/(2R) (2) R/(2s) (3) s/(2r) (4) r/(2s).
Key: 2. Sol: tan(A/2) tan(B/2) tan(C/2) = r/s, and by Euler's inequality r ≤ R/2, so the product is at most R/(2s).

33. In triangle ABC, (r₁ + r₃)/(1 + cos B) = … Key: 2. Sol: r₁ + r₃ = 4R cos²(B/2) = 2R(1 + cos B), so the ratio is 2R.

34. If in a triangle ABC, …, then C = … Key: 4.

35. If H is the orthocentre of an acute-angled triangle ABC whose circumcircle is x² + y² = 16, then the circumdiameter of triangle HBC is (1) 1 (2) 2 (3) 4 (4) 8.
Key: 4. Sol: ∠BHC = π − A, so the circumradius of HBC is a/(2 sin(π − A)) = R; the diameter is 2R = 8.

36. In triangle ABC, I is the incentre. Then IA·IB·IC = (1) 4r²R (2) 4R²r (3) r²R (4) R²r.
Key: 1. Sol: IA·IB·IC = r³ cosec(A/2) cosec(B/2) cosec(C/2) = r³·(4R/r) = 4r²R, using r = 4R sin(A/2) sin(B/2) sin(C/2).

37. In a right-angled triangle ABC with A = π/2, a circle is drawn touching the sides AB, AC and the incircle of the triangle. Its radius equals (1) (2 − √2)r (2) (3 − √2)r (3) (3 + √2)r (4) (3 − 2√2)r.
Key: 4. Sol: both centres lie on the bisector of A at distances r′√2 and r√2 from A; tangency gives r√2 − r′√2 = r + r′, so r′ = r(√2 − 1)² = (3 − 2√2)r.

38. Let S₁ and S₂ be the areas of the inscribed and circumscribed regular polygons of n sides, and S₃ the area of the regular polygon of 2n sides inscribed in the same circle. Then (A) 2S₃ = S₁ + S₂ (B) S₃² = S₁S₂ (C) 1/S₃ = 1/S₁ + 1/S₂ (D) 2/S₃ = 1/S₁ + 1/S₂.
Key: B. Sol: for circle radius r, S₁ = (n/2) r² sin(2π/n), S₂ = n r² tan(π/n), S₃ = n r² sin(π/n); then S₁S₂ = n² r⁴ sin²(π/n) = S₃².

39. In ΔABC, if sin A/(bc) = sin B/(ca) + sin C/(ab), then angle A is (A) 120° (B) 90° (C) 60° (D) 30°.
Key: B. Sol: multiplying by abc gives a sin A = b sin B + c sin C, i.e. a² = b² + c², so a = 2R and A = 90°.

40. In ΔABC, A = 2π/3, b − c = 3√3 cm and the area of ΔABC is 9√3/2 cm²; then BC = (A) 6√3 cm (B) 9 cm (C) 18 cm (D) 27 cm.
Key: B. Sol: (1/2)bc sin(2π/3) = 9√3/2 gives bc = 18, so b² + c² = (b − c)² + 2bc = 27 + 36 = 63 and a² = 63 − 2·18·(−1/2) = 81, i.e. a = 9.

41. In ΔABC, if cot A = …, cot B = …, cot C = … (statement partly lost), then which of the following can be true? (A) a² + a = c − 1 (B) a² + a = c + 1 (C) a² + a = c − 2 (D) a² + a = c + 2.
Key: A. Sol: using cot A cot B + cot B cot C + cot C cot A = 1 with the given values leads to a² + a = c − 1.
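A short hedged verification of problem 40 (my own sketch): recover b and c from b − c = 3√3 and bc = 18, then confirm the area and the side a = 9 via the law of cosines.

```python
import math

# Hedged check of problem 40: A = 2π/3 with b − c = 3√3 and bc = 18
# should give area 9√3/2 and BC = a = 9.
A = 2 * math.pi / 3
bc, diff = 18.0, 3 * math.sqrt(3)
bsum = math.sqrt(diff**2 + 4 * bc)      # (b+c)^2 = (b−c)^2 + 4bc
b, c = (bsum + diff) / 2, (bsum - diff) / 2
area = 0.5 * b * c * math.sin(A)
a = math.sqrt(b*b + c*c - 2*b*c*math.cos(A))
print(area, 9 * math.sqrt(3) / 2)       # both ~7.794
print(a)                                # ~9.0
```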
42. Let AD be a median of ΔABC. If AE and AF are medians of ΔABD and ΔADC respectively, and AD = m₁, AE = m₂, AF = m₃, BC = a, then a²/8 = (A) m₂² + m₃² − 2m₁² (B) m₁² + m₂² − 2m₃² (C) m₁² + m₃² − 2m₂² (D) m₁² + m₂² + m₃².
Key: A. Sol: E and F are the midpoints of BD and DC, so D is the midpoint of EF with ED = a/4; Apollonius' theorem in ΔAEF gives m₂² + m₃² = 2(m₁² + a²/16), i.e. a²/8 = m₂² + m₃² − 2m₁².

43. In ΔABC, A = π/3 and the inradius is 6 units. The radius of the circle touching the sides AB, AC internally and the incircle of ΔABC externally is (A) 3 units (B) 3/2 units (C) 2 units (D) 4 units.
Key: C. Sol: both centres lie on the bisector of A, and (6 − r)/(6 + r) = sin(π/6) = 1/2, so 12 − 2r = 6 + r and r = 2.

44. Three positive real numbers x, y, z satisfy x² + xy + y² = 25, y² + z² = 9 and x² + xz + z² = 16. Then the value of √3·xy + 2yz + zx is (A) 18 (B) 24 (C) 30 (D) 36.
Key: B. Sol: interpret x, y, z as segments from an interior point of a 3–4–5 right triangle subtending angles 120°, 90°, 150°; adding the three sub-triangle areas, (√3/4)xy + (1/2)yz + (1/4)zx = 6, i.e. √3·xy + 2yz + zx = 24.

45. Let ABC be a triangle with ∠BAC = 2π/3 and AB = x such that AB·AC = 1. As x varies, the largest possible length of the internal angle bisector AD is (A) 1 (B) 2 (C) 1/2 (D) 1/4.
Key: C. Sol: AD = 2bc cos(A/2)/(b + c) = 2·1·cos 60°/(x + 1/x) = 1/(x + 1/x) ≤ 1/2, with equality at x = 1.

46. The sides of a triangle inscribed in a given circle subtend angles α, β, γ at the centre. The minimum value of the AM of cos(α + π/2), cos(β + π/2), cos(γ + π/2) is (A) −√3/2 (B) √3/2 (C) 1/2 (D) none of these.
Key: A. Sol: α = 2A etc., so the AM equals −(1/3)(sin 2A + sin 2B + sin 2C) = −(4/3) sin A sin B sin C; the product is greatest for the equilateral triangle, giving −(4/3)(√3/2)³ = −√3/2.

47. In triangle ABC the medians from B and C are perpendicular. The value of cot B + cot C cannot be (A) 1/3 (B) 2/3 (C) 4/3 (D) 5/3.
Key: A. Sol: with BE and CF meeting at P and half-segments x, y as in the figure, cot B = (2x² − y²)/(3xy) and cot C = (2y² − x²)/(3xy), so cot B + cot C = (x² + y²)/(3xy) ≥ 2/3 by AM–GM; 1/3 is impossible.

48. T₁ is an isosceles triangle with circumcircle K. Let T₂ be another isosceles triangle inscribed in K whose base is one of the equal sides of T₁ and which overlaps the interior of T₁; similarly create T₃ from T₂, T₄ from T₃, and so on. The base angle of Tₙ as n → ∞ tends to (a) 30° (b) 60° (c) 90° (d) 120°.
Key: B. Sol: the base angle of Tₙ equals the apex angle of Tₙ₊₁, so if θ is the base angle of Tₙ, the next base angle is 90° − θ/2; iterating, θₙ = 90°(1 − 1/2 + 1/4 − …) ± θ/2ⁿ⁻¹ → 90°/(1 + 1/2) = 60° by the infinite geometric series.
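A hedged coordinate check of problem 42 (my own sketch, on an arbitrary triangle), using NumPy for the vector arithmetic:

```python
import numpy as np

# Hedged check of problem 42: with D the midpoint of BC and E, F the
# midpoints of BD, DC, a^2/8 should equal m2^2 + m3^2 − 2*m1^2.
A = np.array([0.3, 2.1])
B = np.array([-1.0, 0.0])
C = np.array([4.0, 0.5])
D = (B + C) / 2
E, F = (B + D) / 2, (D + C) / 2
m1 = np.linalg.norm(D - A)
m2 = np.linalg.norm(E - A)
m3 = np.linalg.norm(F - A)
a = np.linalg.norm(C - B)
print(a**2 / 8, m2**2 + m3**2 - 2 * m1**2)  # the two values agree
```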
49. R is the circumradius of ΔABC whose circumcentre is S, and R′ is the circumradius of ΔSBC. Then the ratio R′ : R is (a) 1 (b) depends upon the side BC (c) independent of ∠A (d) depends on ∠A.
Key: D. Hint: ∠BSC = 2A, so R = a/(2 sin A) and R′ = a/(2 sin 2A), giving R/R′ = 2 cos A — the ratio depends on angle A.

50. In a triangle ABC, A − B = 120° and R = 8r; then the value of cos C is (A) 1/4 (B) √15/4 (C) 7/8 (D) √3/2.
Key: C. Hint: r = 4R sin(A/2) sin(B/2) sin(C/2) gives 2 sin(A/2) sin(B/2) sin(C/2) = 1/16; writing the product as [cos((A − B)/2) − sin(C/2)] sin(C/2) yields (sin(C/2) − 1/4)² = 0, so sin(C/2) = 1/4 and cos C = 1 − 2 sin²(C/2) = 7/8.

51. In a scalene ΔABC, D is a point on the side AB such that CD² = AD·DB, and sin A sin B = sin²(C/2). Then CD is (a) the median through C (b) the internal bisector of angle C (c) the altitude through C (d) a cevian dividing AB in the ratio 1 : 2.
Key: B. Sol: let ∠ACD = θ, ∠DCB = C − θ; the sine rule in ΔACD and ΔDCB gives AD·BD = CD² sin θ sin(C − θ)/(sin A sin B), so sin A sin B = sin θ sin(C − θ) = (1/2)[cos(2θ − C) − cos C] ≤ sin²(C/2), with equality iff cos(2θ − C) = 1, i.e. θ = C/2: CD is the internal bisector of C.

52. The perimeter of a triangle ABC is 6 times the arithmetic mean of the sines of its angles. If the side a is 1, then A is (a) π/6 (b) π/3 (c) π/2 (d) 2π/3.
Key: a. Hint: a + b + c = 2(sin A + sin B + sin C) = (a + b + c)/R forces R = 1, so sin A = a/(2R) = 1/2 and A = π/6.

53. The radii of the escribed circles of ΔABC are rₐ, r_b, r_c. If rₐ + r_b = 3R and r_b + r_c = 2R, then the smallest angle of the triangle is (a) tan⁻¹(√2 − 1) (b) tan⁻¹(1/√3) (c) tan⁻¹((√2 + 1)/2) (d) tan⁻¹(2 − √3).
Sol: rₐ + r_b = 3R reduces (via Δ = rs and abc = 4RΔ) to 4s(s − c) = 3ab, i.e. (a + b)² − c² = 3ab, so c² = a² + b² − ab and cos C = 1/2, C = 60°. Similarly r_b + r_c = 2R gives 2s(s − a) = bc, whence A = 90° and B = 30°; note that the angles B, C, A are then in AP, and the smallest angle is tan⁻¹(1/√3).

54. With usual notation, in a triangle ABC, a cos(B − C) + b cos(C − A) + c cos(A − B) equals (A) abc/R² (B) abc/(4R²) (C) 4abc/R² (D) abc/(2R²).
Key: A. Sol: expanding, Σ a sin B sin C = 3abc/(4R²) (using a = 2R sin A and bc = 4R² sin B sin C), while Σ a cos B cos C collapses to c sin A sin B = abc/(4R²); the total is abc/R².

55. An isosceles triangle has sides of length 2, 2 and x. The value of x for which the area is maximum is (A) 1 (B) √2 (C) 2 (D) 2√2.
Key: D. Sol: Δ = (1/2)·2·2·sin A is maximal at A = 90°, whence x = 2√2.
56. In a ΔABC, if b + c = 3a, then cot(B/2)·cot(C/2) has the value (A) 4 (B) 3 (C) 2 (D) 1.
Key: C. Sol: cot(B/2) cot(C/2) = s/(s − a) = 2s/(2s − 2a); since a + b + c = 4a, 2s = 4a and the value is 4a/2a = 2.

57. Let f, g, h be the lengths of the perpendiculars from the circumcentre of ΔABC on the sides a, b and c. If a/f + b/g + c/h = λ·(a/f)(b/g)(c/h), then λ = (A) 1/4 (B) 1/2 (C) 1 (D) 2.
Key: A. Sol: tan A = a/(2f), so a/f = 2 tan A; then Σ a/f = 2Σ tan A = 2Π tan A = (1/4)·8 Π tan A = (1/4)(a/f)(b/g)(c/h).

58. In a triangle ABC, R(b + c) = a√(bc), where R is the circumradius. Then the triangle is (A) isosceles but not right (B) right but not isosceles (C) right isosceles (D) equilateral.
Key: C. Sol: R = a/(2 sin A) gives sin A = (b + c)/(2√(bc)) ≥ 1 by AM–GM, so sin A = 1 and b = c: the triangle is right isosceles.

59. A triangle with integral sides has perimeter 8 cm. The area of the triangle is (A) 2√2 cm² (B) 16√3/9 cm² (C) 2√3 cm² (D) 4√2 cm².
Key: A. Sol: the only possibility is sides 3, 3, 2; Heron gives Δ = √(4·1·1·2) = 2√2 cm².

60. In triangle ABC, if a² + c² = 2002·b², then (cot A + cot C)/cot B = (A) 1/2001 (B) 2/2001 (C) 3/2001 (D) 4/2001.
Key: B. Sol: cot A + cot C = b²/(2Δ) and cot B = (a² + c² − b²)/(4Δ), so the ratio is 2b²/(a² + c² − b²) = 2b²/(2001 b²) = 2/2001.

61. The incircle touches the sides BC, CA and AB of ΔABC at D, E and F respectively. If the lengths BD, CE and AF are consecutive integers (the printed solution also uses inradius r = 4), the largest side of the triangle equals (a) 13 (b) 14 (c) 15 (d) cannot be determined.
Sol: let BD = BF = n, CE = CD = n + 1, AF = AE = n + 2; then a = 2n + 1, b = 2n + 3, c = 2n + 2 and s = 3n + 3; r² = (s − a)(s − b)(s − c)/s gives n(n + 2) = 48, so n = 6 and the largest side is 2n + 3 = 15, option (c).

62. In a ΔABC, medians AD and BE are drawn. If AD = 4, ∠DAB = π/6 and ∠ABE = π/3, then the area of ΔABC is (A) 64/√3 (B) 8/(3√3) (C) 16/√3 (D) 32/(3√3).
Key: D. Sol: the medians meet at the centroid G with AG = (2/3)AD = 8/3; in ΔAGB the angles at A and B are π/6 and π/3, so ∠AGB = π/2 and BG = AG tan(π/6) = 8/(3√3); then [AGB] = (1/2)(8/3)(8/(3√3)) = 32/(9√3) and [ABC] = 3[AGB] = 32/(3√3).

64. In a ΔABC the incentre and circumcentre are reflections of each other in the side BC. The measure of ∠BAC (in degrees) is (a) 120 (b) 108 (c) 135 (d) 105.
Key: b. Sol: with incentre I and circumcentre S, ∠BIC = 90° + A/2 (standard result) and reflex ∠BSC = 360° − 2A; the reflection makes these equal, so 90° + A/2 = 360° − 2A and A = 108°.

65. ABC is a triangle. Put x = a cos A, y = b cos B, z = c cos C. Then x, y, z are the side lengths of a triangle (a) only if ΔABC is equilateral (b) only if ΔABC is obtuse (c) only if ΔABC is right-angled (d) for any acute ΔABC.
Key: d. Sol: for any acute triangle ABC, x, y, z are the side lengths of the orthic triangle, formed by the feet of the altitudes of ΔABC.
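A hedged brute-force check of problem 61 (and its later companion, problem 89) — my own sketch assuming tangent lengths n, n+1, n+2 and inradius 4:

```python
import math

# Hedged check of problems 61/89: tangent lengths n, n+1, n+2 with
# inradius 4 force n(n+2) = 48, i.e. n = 6 and sides 13, 14, 15.
for n in range(1, 20):
    a, b, c = 2*n + 1, 2*n + 3, 2*n + 2      # BC, CA, AB
    s = 3 * (n + 1)
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    if abs(area / s - 4) < 1e-9:             # inradius r = area/s
        print(n, sorted((a, b, c)), 2 * s)   # -> 6 [13, 14, 15] 42
```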
66. If ABC is a triangle in which π/2 < C < π, then the quantity (a² + b²)/c² lies in the interval (a) (0, 1/2) (b) (1, 3/2) (c) (3/2, 2) (d) (1/2, 1).
Key: d. Sol: cos C = (a² + b² − c²)/(2ab) < 0 gives a² + b² < c², so the ratio is less than 1; and c < a + b with (a + b)² ≤ 2(a² + b²) gives c² < 2(a² + b²), so the ratio exceeds 1/2.

67. If cos A + cos B + 2 cos C = 2, then the sides of ΔABC are in (A) AP (B) GP (C) HP (D) none of these.
Key: A. Sol: cos A + cos B = 2(1 − cos C) = 4 sin²(C/2), so cos((A − B)/2) = 2 sin(C/2); multiplying by 2 cos(C/2), sin A + sin B = 2 sin C, i.e. a + b = 2c, so a, c, b are in AP.

69. Let I be the incentre of the triangle ABC, with the vector relation BI = k(BC/|BC| + BA/|BA|). Then the diameter of the circumcircle of the triangle is (A) k(cos(A/2) + cos(C/2)) (B) k(sin(A/2) + sin(C/2)) (C) k(cot(A/2) + cot(C/2)) (D) k(tan(A/2) + tan(C/2)).
Key: C. Sol: taking moduli, |BI| = 2k cos(B/2), while |BI| = r/sin(B/2) = 4R sin(A/2) sin(C/2); hence 2R = k cos(B/2)/[sin(A/2) sin(C/2)] = k sin((A + C)/2)/[sin(A/2) sin(C/2)] = k(cot(A/2) + cot(C/2)).
76. If mₐ, m_b, m_c are the lengths of the medians through the vertices A, B, C of triangle ABC, then the side c = (A) (1/3)√(2mₐ² + 2m_c² − m_b²) (B) (2/3)√(2mₐ² + 2m_c² − m_b²) (C) (1/3)√(2mₐ² + 2m_b² − m_c²) (D) (2/3)√(2mₐ² + 2m_b² − m_c²).
Key: D. Sol: AG = (2/3)mₐ, BG = (2/3)m_b, and the median of ΔAGB from G has length m_c/3; Apollonius gives (4/9)(mₐ² + m_b²) = 2(m_c²/9 + c²/4), i.e. c² = (4/9)(2mₐ² + 2m_b² − m_c²).

77. If the bisector of angle A of triangle ABC makes an angle θ with BC, then sin θ equals (A) cos((B − C)/2) (B) sin((B − C)/2) (C) sin(A/2 − B) (D) sin(A/2 − C).
Key: A. Sol: in the triangle cut off by the bisector, θ = B + A/2 = B + (180° − B − C)/2 = 90° + (B − C)/2, so sin θ = cos((B − C)/2).

78. A circle of diameter 2x is drawn with its centre on the side BC of triangle ABC so that it touches the sides AB and AC. Then x = (A) Δ/(2(b + c)) (B) 2Δ/(b + c) (C) 2Δ/(bc) (D) (b + c)/(2Δ).
Key: B. Sol: splitting ΔABC by the line AD through the centre, Δ = (1/2)c·x + (1/2)b·x, so x = 2Δ/(b + c).

79. If in a triangle ABC, b cos²(A/2) + a cos²(B/2) = 3c/2, then the minimum value of (2a + c)/(a − c) + (2b + c)/(b − c) is (A) 2 (B) 4 (C) 6 (D) 8.
Key: B. Sol: the left side of the condition is (1/2)(a + b + b cos A + a cos B) = (1/2)(a + b + c) by the projection formula, so a + b = 2c; substituting c = (a + b)/2, the expression evaluates identically to 4.

80. A right-angled triangle ABC of maximum area is inscribed in a circle of radius R (here Δ is the area, s the semiperimeter and r₁, r₂, r₃ the exradii of ΔABC). Then (A) Δ = 2R² (B) 1/r₁ + 1/r₂ + 1/r₃ = (√2 + 1)/R (C) r = (√2 − 1)R (D) s = (2 + √2)R.
Key: B. Sol: the maximal right triangle has legs AB = AC = R√2, so Δ = R² and s = (√2 + 1)R; then 1/r₁ + 1/r₂ + 1/r₃ = 1/r = s/Δ = (√2 + 1)/R.

81. In an acute-angled triangle ABC with orthocentre H, let AH = x, BH = y, CH = z. Then x² + y² + z² = (A) 16R² − (a² + b² + c²) (B) 12R² − (a² + b² + c²) (C) 9R² − (a² + b² + c²) (D) 8R² − (a² + b² + c²).
Key: B. Sol: AH = 2R cos A etc., so the sum is 4R²(cos²A + cos²B + cos²C) = 4R²(3 − sin²A − sin²B − sin²C) = 12R² − (a² + b² + c²).

82. Let ABC be a triangle with ∠ACB = π/6, and let a, b, c denote the sides opposite A, B, C. The value of x for which a = x² + x + 1, b = x² − 1 and c = 2x + 1 is (A) 2 + √3 (B) 2 − √3 (C) 1 + √3 (D) 4√3.
Key: C. Sol: a is the largest side, and the configuration forces A = 120°, B = C = 30°; then b = c gives x² − 1 = 2x + 1, i.e. x² − 2x − 2 = 0, whose positive root is x = 1 + √3.

83. In ΔABC, D is the midpoint of BC and AD is perpendicular to AC. Then cos A·cos C = (A) (c² − a²)/(3ac) (B) 3(c² − a²)/(2ac) (C) 2(c² − a²)/(3ac) (D) 2(a² − c²)/(3ac).
Key: C. Sol: the right angle at A in ΔDAC gives cos C = b/(a/2) = 2b/a; comparing with cos C = (a² + b² − c²)/(2ab) yields 3b² = a² − c², and then cos A cos C = (b² + c² − a²)/(ac)·(substituting b²) = 2(c² − a²)/(3ac).

84. In ΔABC, if r = 1, R = 5 and Δ = 10, then ab + bc + ca = (A) 81 (B) 121 (C) 141 (D) 111.
Key: B. Sol: ab + bc + ca = r² + s² + 4Rr (from r₁r₂ + r₂r₃ + r₃r₁ = s² and r₁ + r₂ + r₃ = 4R + r); s = Δ/r = 10, so the value is 1 + 100 + 20 = 121.
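A hedged check of the identity behind problem 84 (my own sketch, tested on a 5–12–13 right triangle rather than the problem's data, since the identity is what the solution relies on):

```python
import math

# Hedged check of ab + bc + ca = r^2 + s^2 + 4Rr on a 5-12-13 triangle.
a, b, c = 5.0, 12.0, 13.0
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))
r, R = area / s, a * b * c / (4 * area)
print(a*b + b*c + c*a, r**2 + s**2 + 4*R*r)  # both 281.0
```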
85. If in an equilateral triangle the inradius is a rational number, then which of the following is not true? (A) the circumradius is always rational (B) the area is always irrational (C) the ex-radii are always rational (D) the perimeter is always rational.
Key: D. Sol: R = 2r is rational; r₁ = 4R sin(A/2) cos(B/2) cos(C/2) = 4R(1/2)(√3/2)² = (3/2)R is rational (similarly r₂, r₃); the area Δ = 2R² sin A sin B sin C = 2R²(√3/2)³ = (3√3/4)R² is irrational; and the perimeter a + b + c = 2R(sin A + sin B + sin C) = 3√3·R is irrational, so (D) is not true.

87. In an isosceles triangle ABC, AB = AC. If the vertical angle A is 20°, then a³ + b³ equals (a) 3a²b (b) 3b²c (c) 3c²a (d) abc.
Key: c. Sol: B = C = 80°, so b = c and a = 2b sin 10°; using sin 30° = 3 sin 10° − 4 sin³10° = 1/2, a³ + b³ = b³(8 sin³10° + 1) = 6b³ sin 10° = 3b²·a = 3c²a.

88. Which of the following pieces of data does not uniquely determine an acute-angled ΔABC (R = circumradius)? (a) a, sin A, sin B (b) a, b, c (c) a, sin B, R (d) a, sin A, R.
Key: D. Sol: (a) gives b, c and all angles; (b) gives the angles by the cosine rule; (c) gives sin A, b and then C; but from a, sin A, R only the single consistency relation a = 2R sin A is available, so b, c, B, C are not determined.

89. The incircle of a ΔABC touches the sides BC, CA, AB at the points D, E, F respectively. If the lengths of BD, CE, AF are consecutive positive integers and the inradius of the triangle is 4 units, then the perimeter of the triangle is (A) 42 (B) 35 (C) 84 (D) 57.
Key: A. Sol: with BD = n, CE = n + 1, AF = n + 2, applying r² = (s − a)(s − b)(s − c)/s (equivalently rs = Δ) gives n(n + 2) = 48, so n = 6; the sides are 13, 14, 15 and the perimeter is 42.

90. Tangents at P, Q, R on a circle of radius r form a triangle whose sides are 3r, 4r, 5r. Then PR² + RQ² + QP² = (A) 84r²/5 (B) 184r²/5 (C) 176r²/5 (D) none of these.
Key: C. Sol (as printed): AQ = AR = r cot(A/2), and taking the chords as 4r cos(A/2), 4r cos(B/2), 4r cos(C/2), the sum is 16r²·Σ(1 + cos X)/2 = 8r²(3 + cos A + cos B + cos C) = 8r²(3 + 3/5 + 4/5) = 176r²/5.

91. In a triangle ABC, if a : b : c = 7 : 8 : 9, then cos A : cos B = (A) 11/63 (B) 22/63 (C) 2/9 (D) none of these.
Key: D. Sol: cos A = (64 + 81 − 49)/144 = 2/3 and cos B = (49 + 81 − 64)/126 = 11/21, so cos A : cos B = 14 : 11, which is none of the listed options.

92. In a triangle ABC, if cos A + cos B + cos C = 7/4, then r/R is equal to (A) 3/4 (B) 4/3 (C) 2/3 (D) 3/2.
Key: A. Sol: cos A + cos B + cos C = 1 + r/R = 7/4, so r/R = 3/4.
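A hedged numeric confirmation of problem 91 (my own sketch) that the ratio really is 14 : 11, i.e. "none of these":

```python
# Hedged check of problem 91: with a : b : c = 7 : 8 : 9,
# cos A : cos B comes out 14 : 11.
a, b, c = 7.0, 8.0, 9.0
cosA = (b*b + c*c - a*a) / (2*b*c)   # 2/3
cosB = (c*c + a*a - b*b) / (2*c*a)   # 11/21
print(cosA / cosB, 14 / 11)          # both ~1.2727
```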
93. In a ΔABC, cot(A/2) + cot(B/2) + cot(C/2) is equal to (A) Δ/r² (B) (a + b + c)²/(2R·abc) (C) Δ/r (D) Δ/(Rr).
Key: A. Sol: cot(X/2) = s(s − x)/Δ, so the sum is s[3s − (a + b + c)]/Δ = s²/Δ = Δ/r² (since Δ = rs); equivalently, using Δ = abc/(4R), it can be written as (a + b + c)²R/(abc).

94. In an acute-angled triangle ABC, r + r₁ = r₂ + r₃ and B > π/3; then (A) b + 2c < 2a < 2b + 2c (B) b + 4c < 4a < 2b + 4c (C) b + 4c < 4a < 4b + 4c (D) b + 3c < 3a < 3b + 3c.
Key: D. Sol: rewriting the condition as r₂ − r = r₁ − r₃ and simplifying with rᵢ = Δ/(s − ·) expresses tan(B/2) through (a − c)/b; since B/2 ∈ (π/6, π/4) for an acute triangle with B > π/3, tan(B/2) ∈ (1/√3, 1), which translates to 1/3 < (a − c)/b < 1, i.e. b + 3c < 3a < 3b + 3c.

95. In a triangle ABC, A = 30° and BC = 2 + √5. Then the distance of the vertex A from the orthocentre of the triangle is (A) 1 (B) √3(2 + √5) (C) √3/2 + 1/2 (D) 1/2.
Key: B. Sol: R = a/(2 sin A) = (2 + √5)/(2 sin 30°) = 2 + √5, and AH = 2R cos A = 2(2 + √5) cos 30° = √3(2 + √5).

96. If c² = a² + b² and 2s = a + b + c, then 4s(s − a)(s − b)(s − c) = (A) s⁴ (B) b²c² (C) c²a² (D) a²b².
Key: D. Sol: C = π/2 gives Δ = ab/2 = √(s(s − a)(s − b)(s − c)), so 4s(s − a)(s − b)(s − c) = 4Δ² = a²b².

97. If cot(A/2) = (b + c)/a, then the ΔABC is (A) isosceles (B) equilateral (C) right angled (D) none of these.
Key: C. Sol: (b + c)/a = (sin B + sin C)/sin A = cos((B − C)/2)/sin(A/2), so cos(A/2) = cos((B − C)/2), giving A = B − C, i.e. B = A + C = π/2.

98. In a triangle ABC, (a + b + c)(b + c − a) = λbc if (A) λ > 0 (B) λ > 6 (C) 0 < λ < 4 (D) λ > 4.
Key: C. Sol: 2s(2s − 2a) = λbc, i.e. s(s − a)/(bc) = cos²(A/2) = λ/4, and 0 < cos²(A/2) < 1, so 0 < λ < 4.

99. If a, b, c are the sides of a triangle, then the minimum value of 2a/(b + c − a) + 2b/(c + a − b) + 2c/(a + b − c) is (A) 3 (B) 9 (C) 6 (D) 1.
Key: C. Sol: with a + b + c = 2s, the sum equals s/(s − a) + s/(s − b) + s/(s − c) − 3 ≥ 9 − 3 = 6 by the AM–HM inequality.

100. In triangle ABC, the medians AD and BE are mutually perpendicular; such a triangle exists if (A) 1/4 < a/b < 1/2 (B) 1/4 < b/a < 3/4 (C) 1/4 < a/b < 3/4 (D) 1/2 < b/a < 2.
Key: D. Sol: AD ⊥ BE forces a² + b² = 5c²; the triangle inequality (a − b)² < c² < (a + b)² then gives 2a² − 5ab + 2b² < 0, i.e. (2a − b)(a − 2b) < 0, so 1/2 < a/b < 2 (equivalently 1/2 < b/a < 2).

101. Consider a given acute-angled triangle ABC with circumcentre O. Let D be a variable interior point of the side BC. The limiting value of the circumradius R₁ of ΔOCD as D approaches the vertex C is (A) R/(2 cos A) (B) R/cos A (C) R/sin A (D) R/(2 sin A).
Key: B. Sol: ∠OCB = π/2 − A and OC = R; by the sine rule in ΔOCD, 2R₁ = OC/sin ∠ODC, and as D → C, sin ∠ODC → cos A, so 2R₁ → R/cos A.
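A hedged coordinate check of problem 100 (my own sketch): choose sides with a² + b² = 5c² and confirm the medians AD and BE are indeed perpendicular.

```python
import math

# Hedged check of problem 100: pick a, b with a^2 + b^2 = 5c^2 and
# confirm medians AD and BE are perpendicular.
c = 1.0
a = 1.2
b = math.sqrt(5 * c * c - a * a)
# place A = (0,0), B = (c,0); locate C from the side lengths
x = (b*b + c*c - a*a) / (2 * c)
y = math.sqrt(b*b - x*x)
A, B, C = (0.0, 0.0), (c, 0.0), (x, y)
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)   # midpoint of BC
E = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)   # midpoint of CA
AD = (D[0] - A[0], D[1] - A[1])
BE = (E[0] - B[0], E[1] - B[1])
print(AD[0] * BE[0] + AD[1] * BE[1])         # ~0: perpendicular
```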
102. If the circumradius and inradius of a triangle are 8 and 3, then the value of a/tan A + b/tan B + c/tan C equals (A) 11 (B) 33 (C) 44 (D) 55.
Key: D. Sol (as printed): a cot A + b cot B + c cot C = 2(R + r) = 2(8 + 3) = 22.

103. ABCD is a quadrilateral circumscribed about a circle of unit radius. Then (A) AB sin(C/2) sin(A/2) = CD sin(B/2) sin(D/2) (B) AB sin(A/2) sin(B/2) = CD sin(C/2) sin(D/2) (C) AB sin(A/2) sin(C/2) = CD sin(B/2) sin(B/2) (D) AB sin(A/2) cos(B/2) = CD sin(C/2) cos(D/2).
Key: B. Sol: with P the point of contact on AB, AP = cot(A/2) and PB = cot(B/2), so AB = sin((A + B)/2)/[sin(A/2) sin(B/2)]; since A + B + C + D = 2π, sin((A + B)/2) = sin((C + D)/2), whence AB sin(A/2) sin(B/2) = CD sin(C/2) sin(D/2).

104. In triangle ABC, a : b : c = (1 + x) : 1 : (1 − x), where x ∈ (0, 1). If A = C + π/2, then x equals (A) 1/√6 (B) 1/(2√6) (C) 1/√7 (D) 1/(2√7).
Key: C. Sol: write a = (1 + x)h, b = h, c = (1 − x)h; the condition (A − C)/2 = π/4 together with the half-angle formulas cos((A − C)/2)·? — working through the printed manipulation (or checking directly with the cosine rule) — gives 7x² = 1, i.e. x = 1/√7.

Mathematics — Progressions of Triangles

Integer Answer Type

1. If a, b, c are the sides of a triangle satisfying a² + b² + c² = 6, then the AM of all the integral values lying in the range of ab + bc + ca is —
Key: 5. Sol: (1/2)Σ(a − b)² ≥ 0 gives ab + bc + ca ≤ a² + b² + c² = 6; and since each cosine of an angle is less than 1, a² < b² + c² − 2bc + 2bc etc., so a² + b² + c² < 2(ab + bc + ca), i.e. ab + bc + ca > 3. The integral values are 4, 5, 6, with AM 5.

2. Let the lengths of the altitudes drawn from the vertices of a triangle ABC to the opposite sides be 2, 2 and 3. If the area of ΔABC is Δ, find the value of 2√2·Δ.
Key: 9. Sol: a = 2Δ/2 = Δ, b = Δ, c = 2Δ/3; Heron's formula gives Δ² = (4Δ/3)(Δ/3)(Δ/3)(2Δ/3) = 8Δ⁴/81, so Δ² = 81/8 and 2√2·Δ = 9.

3. If r and R are respectively the radii of the inscribed and circumscribed circles of a regular polygon of n sides such that R/r = √5 − 1, then n is equal to —
Key: 5. Sol: r = (a/2) cot(π/n) and R = (a/2) cosec(π/n), so R/r = sec(π/n); sec 36° = √5 − 1, hence n = 5.

4. In a ΔABC, a = 5, b = 4 and tan(C/2) = √7/3; then the side c is —
Key: 6. Sol: cos C = (1 − tan²(C/2))/(1 + tan²(C/2)) = (1 − 7/9)/(1 + 7/9) = 1/8, so c² = 25 + 16 − 2·5·4·(1/8) = 36 and c = 6.

5. In triangle ABC, a = √5, b = 2, A = π/6, and c₁, c₂ are the two possible values of the third side; then |c₁ − c₂| is —
Key: 4. Sol: a² = b² + c² − 2bc cos A gives c² − 2√3·c − 1 = 0, so c₁ + c₂ = 2√3, c₁c₂ = −1 and (c₁ − c₂)² = 12 + 4 = 16.
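A hedged check of integer problem 5 (my own sketch) via the quadratic for the third side:

```python
import math

# Hedged check of integer problem 5: a = √5, b = 2, A = π/6.
# The third side satisfies c^2 − 2b cos(A) c + (b^2 − a^2) = 0.
a, b, A = math.sqrt(5), 2.0, math.pi / 6
p = 2 * b * math.cos(A)          # c1 + c2 = 2√3
q = b * b - a * a                # c1 * c2 = −1
disc = math.sqrt(p * p - 4 * q)  # |c1 − c2| = √(12 + 4)
print(disc)                      # 4.0
```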
6. The ratios of the lengths of the sides BC and AC of a triangle ABC to the radius of its circumscribed circle are 2 and 3/2 respectively. If the ratio of the lengths of the bisectors of the interior angles B and C is written in the form λ(√7 − 1)/(μ√2) …, the requested integer is —
Key: 9. Sol: a = 2R forces A = 90°, so c = √7·R/2, sin B = 3/4, sin C = √7/4, cos²(B/2) = (4 + √7)/8 and cos²(C/2) = 7/8; the bisector-length formula t = 2·(product of adjacent sides)·cos(half-angle)/(sum of adjacent sides) then gives the ratio 7(√7 − 1)/(9√2), whence the answer 9.

7. In ΔABC, 3a = b + c; then cot(B/2)·cot(C/2) is —
Key: 2. Sol: 2s = a + b + c = 4a, so cot(B/2) cot(C/2) = s/(s − a) = 2a/a = 2.

8. In ΔABC, if R(a + b) = c√(ab) and a = 2 + √2, then the inradius r is —
Key: 1. Sol: sin C = c/(2R) = (a + b)/(2√(ab)) ≥ 1 by AM–GM, so C = π/2 and a = b; then r = (a + b − c)/2 = [2(2 + √2) − √2(2 + √2)]/2 = 1.

9. If the median AD of triangle ABC makes an angle π/4 with the side BC, then the value of |cot B − cot C| is —
Key: 2. Sol: by the m–n theorem with BD = DC, (1 + 1) cot(π/4) = cot B − cot C (up to sign), so |cot B − cot C| = 2.

10. If A = 30°, a = 7 and b = 8 in ΔABC, then the number of triangles that can be constructed is —
Key: 2. Sol: sin B = b sin A/a = 4/7 < 1 with b > a, so B takes two values and two triangles exist.

11. In ΔABC, if cos A + 2 cos B + cos C = 2, then the value of 2s/b (s the semiperimeter) is —
Key: 3. Sol: cos A + cos C = 2(1 − cos B) gives cos((A − C)/2) = 2 sin(B/2), so a + c = 2b and 2s = 3b.

12. In ΔABC, if r₁ = 6, R = 5, r = 2, then the value of 3 tan A is —
Key: 4. Sol: r₁ − r = 4R sin²(A/2) gives sin²(A/2) = 1/5, so tan(A/2) = 1/2 and tan A = (2·½)/(1 − ¼) = 4/3, i.e. 3 tan A = 4.

13. In ΔABC, if C = 90°, then the value of (a + b)/(r + R) is —
Key: 2. Sol: for a right angle at C, r = (a + b − c)/2 and R = c/2, so r + R = (a + b)/2.

14. Points D, E are taken on the side BC of an acute-angled ΔABC such that BD = DE = EC. If ∠BAD = x, ∠DAE = y, ∠EAC = z, then the value of [sin(x + y) sin(y + z)]/(sin x sin z) is —
Key: 4. Sol: the sine rule in ΔABD, ΔABE gives AD/AE = sin(x + y)/(2 sin x), and in ΔACE, ΔACD gives AE/AD = sin(y + z)/(2 sin z); multiplying the two yields 4.

15. If the circumcentre of triangle ABC lies on its incircle, then [4(cos A + cos B + cos C)] = — (where [x] is the greatest integer ≤ x).
Key: 5. Sol: OI² = R² − 2Rr = r² gives (r/R)² + 2(r/R) − 1 = 0, so r/R = √2 − 1; then Σcos A = 1 + r/R = √2 and [4√2] = 5.

16. The area of a cyclic quadrilateral ABCD is 3√3/4 and the radius of its circumscribing circle is 1. If AB = 1 and BD = √3, then the value of 3·BC·CD is —
Key: 6. Sol: cos 2C = −1/2 gives C = 60° and A = 120°; the cosine rule in ΔABD gives AD² + AD − 2 = 0, so AD = 1 and [ABD] = √3/4; then [BCD] = 3√3/4 − √3/4 = √3/2 = (√3/4)·BC·CD, so BC·CD = 2 and 3·BC·CD = 6.
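A hedged check of integer problem 12 (my own sketch), recovering A from r₁ − r = 4R sin²(A/2):

```python
import math

# Hedged check of integer problem 12: r1 − r = 4R sin^2(A/2)
# with r1 = 6, R = 5, r = 2 gives sin^2(A/2) = 1/5 and 3 tan A = 4.
r1, R, r = 6.0, 5.0, 2.0
half_A = math.asin(math.sqrt((r1 - r) / (4 * R)))
print(3 * math.tan(2 * half_A))  # 4.0
```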
17. The lengths of the tangents drawn from the vertices A, B, C to the incircle of ΔABC are 5, 3, 2 respectively. If the lengths of the chords of tangents drawn parallel to the sides BC, CA, AB within the triangle are α, β, γ respectively, then [α + β + γ] = — ([·] is the greatest integer function).
Key: 6. Sol: s − a = 5, s − b = 3, s − c = 2 give s = 10 and a = 5, b = 7, c = 8; each parallel tangent cuts off a similar triangle of ratio (s − ·)/s, so α = a(s − a)/s = 2.5, β = b(s − b)/s = 2.1, γ = c(s − c)/s = 1.6, and [6.2] = 6.

18. In ΔABC, r/r₁ = 1/2; then the value of 4[tan(A/2)(tan(B/2) + tan(C/2))] must be —
Key: 2. Sol: r/r₁ = tan(B/2) tan(C/2) = 1/2; since the pairwise products of the half-angle tangents sum to 1, tan(A/2)[tan(B/2) + tan(C/2)] = 1/2, and four times that is 2.

21. With usual notation in triangle ABC, the numerical value of [a/(r₂ + r₃) + b/(r₃ + r₁) + c/(r₁ + r₂)]·(…) is —
Ans: 4. Sol: r₂ + r₃ = 4R cos²(A/2) and a = 4R sin(A/2) cos(A/2), so a/(r₂ + r₃) = tan(A/2), and similarly for the other terms.

22. In ΔABC, (r₁ + r₂)(r₂ + r₃)(r₃ + r₁)/(Rs²) = — (where r₁, r₂, r₃ are the exradii, R the circumradius and s the semiperimeter).
Key: 4. Sol: r₁ + r₂ = 4R cos²(C/2) and its analogues, together with s = 4R cos(A/2) cos(B/2) cos(C/2), give the product as 4Rs².

23. If in a triangle ABC, b cos²(A/2) + a cos²(B/2) = 3c/2, then the minimum value of (2a + c)/(a − c) + (2b + c)/(b − c) is —
Key: 4. Sol: as in problem 79 above, the condition gives a + b = 2c, and with c = (a + b)/2 the expression is identically 4.

24. If the side BC of a ΔABC has length 4 cm and ∠BAC = 120°, then the distance between the incentre and the excentre touching the side BC is —
Key: 8. Sol: II₁ = AI₁ − AI = (r₁ − r) cosec(A/2) = a/cos(A/2) = 4/cos 60° = 8.
26. Given a parallelogram whose acute angle is 60°, if the squares of the lengths of the diagonals are in the ratio 1 : 3, then a/b (a, b the side lengths) is —
Key: 1. Sol: applying the cosine rule in ΔABD and ΔBCD, d₁² + d₂² = 2a² + 2b²; with d₂² = 3d₁² this gives a² + b² = 2d₁², and the 60° angle gives d₁² = a² + b² − ab, so ab = d₁² and a² + b² = 2ab, i.e. a = b and a/b = 1.

27. In a triangle ABC, the foot of the perpendicular from A divides the opposite side into parts of lengths 3 and 17, and tan A = 22/7. Let PQR be a right-angled triangle (right angle at Q) with ∠P = ∠A and PQ = 7 units. Then [Area(ABC)/Area(PQR)] is — ([·] the greatest integer function).
Key: 1. Sol: tan A = tan(A₁ + A₂) with tan A₁ = 3/AD, tan A₂ = 17/AD gives AD = 11, so [ABC] = (1/2)·20·11 = 110; QR = PQ tan P = 22, so [PQR] = (1/2)·22·7 = 77, and [110/77] = 1.

28. ABC is an acute-angled triangle with A = 30°. H is the orthocentre and M is the midpoint of BC. On the line HM a point T is taken such that HM = MT. If BC = 4 cm, then the length of AT is —
Key: 8. Sol: ΔBMH and ΔCMT are congruent, so CT ∥ BH, CT ⊥ AC and CT = BH; then AT² = AC² + CT² = AB²cosec²C·(…) = a²/sin²A = 16/(1/2)² = 64, so AT = 8.

29. If I is the incentre of triangle ABC and R₁, R₂, R₃ are the circumradii of the triangles BIC, CIA, AIB respectively, then the maximum value of a²/R₁² + b²/R₂² + c²/R₃² is —
Key: 9. Sol: ∠BIC = 90° + A/2, so R₁ = a/(2 cos(A/2)) and a/R₁ = 2 cos(A/2); the sum is 4Σcos²(A/2) = 2(3 + Σcos A) ≤ 2(3 + 3/2) = 9, attained for the equilateral triangle.

30. The exradii r₁, r₂, r₃ of a triangle ABC are in HP. If its area is 24 sq. cm and its perimeter is 24 cm, then the length of its largest side is —
Key: 10. Sol: r₁, r₂, r₃ in HP means s − a, s − b, s − c in AP, so a, b, c are in AP; s = 12 and b = 8, a + c = 16; Heron gives (12 − a)(12 − c) = 12, so {12 − a, 12 − c} = {2, 6} and the sides are 6, 8, 10.

31. In ΔABC, the circle with the altitude AD as diameter intersects AB at P and AC at Q such that PQ = λΔ/R, where Δ, R are the area and circumradius of triangle ABC; then λ is —
Key: 1. Sol: PQ is a chord subtending angle A in a circle of diameter AD, so PQ = AD sin A; with AD = 2Δ/a and sin A = a/(2R), PQ = Δ/R, i.e. λ = 1.

32. In a triangle ABC, CH and CM are the lengths of the altitude and median to the base AB. If a = 10, b = 26, c = 32, then HM = —
Key: 9. Sol: AH = (b² + c² − a²)/(2c) = 1600/64 = 25 and AM = c/2 = 16, so HM = 25 − 16 = 9.

33. In a ΔABC, perpendiculars drawn from the angles A, B, C of an acute-angled triangle on the opposite sides are produced to meet the circumscribing circle at D, E, F respectively, the produced parts having lengths α, β, γ. Then the value of (a/α + b/β + c/γ)/(tan A + tan B + tan C) is —
Key: 2. Sol: a/α = tan B + tan C (a standard result), so the numerator equals 2(tan A + tan B + tan C).
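A hedged arithmetic check of problem 32 (my own sketch), locating the altitude foot and midpoint along AB:

```python
# Hedged check of problem 32 (a = 10, b = 26, c = 32): the foot H of the
# altitude from C lies at distance (b^2 + c^2 − a^2)/(2c) from A, and the
# midpoint M of AB at c/2, so HM = |25 − 16| = 9.
a, b, c = 10.0, 26.0, 32.0
AH = (b*b + c*c - a*a) / (2 * c)  # 25.0
AM = c / 2                        # 16.0
print(abs(AH - AM))               # 9.0
```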
43. Let ABC and ABC₁ be two non-congruent triangles with AB = 4, AC = AC₁ = 2√2 and angle B = 30°. The absolute value of the difference between the areas of these triangles is —
Key: 4. Sol: sin C = c sin B/b = 4·(1/2)/(2√2) = 1/√2, so C = 45° or 135° and ∠CAC₁ = 90°; the area difference is [ACC₁] = (1/2)·AC·AC₁ = (1/2)(2√2)(2√2) = 4.

44. Consider a triangle ABC with a = 6, b = 10 and area 15√3. If ∠ACB is obtuse and r denotes the inradius, then r² = —
Key: 3. Sol: (1/2)ab sin C = 15√3 gives sin C = √3/2, so C = 120°; then c² = a² + b² − 2ab cos C = 196, c = 14, s = 15 and r = Δ/s = √3, so r² = 3.

45. If B = 60°, C = 45°, D divides BC internally in the ratio 1 : 3, and sin∠CAD/sin∠BAD = λ, then λ³/√6 + λ² − λ√6 + 3 = —
Key: 9. Sol: by the ratio lemma (sine rule in the two sub-triangles), λ = √6; substituting gives 6 + 6 − 6 + 3 = 9.

46. If p₁, p₂, p₃ are the altitudes of a triangle from the vertices A, B, C and Δ is the area of the triangle, prove that 1/p₁ + 1/p₂ − 1/p₃ = 2ab cos²(C/2)/[(a + b + c)Δ].
Sol: pᵢ = 2Δ/(side), so the left side is (a + b − c)/(2Δ) = (s − c)/Δ; and since cos²(C/2) = s(s − c)/(ab), the right side is 2ab·s(s − c)/[ab·2s·Δ] = (s − c)/Δ as well.

47. In a ΔABC the angles A, B, C are in AP. Show that (a + c)/√(a² − ac + c²) = 2 cos((A − C)/2).
Sol: B = 60°, so b² = a² + c² − ac and (a + c)/√(a² − ac + c²) = (a + c)/b = (sin A + sin C)/sin B = 2 sin((A + C)/2) cos((A − C)/2)/sin 60° = 2 cos((A − C)/2), since (A + C)/2 = 60°.
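A hedged numeric check of problem 44 (my own sketch), taking the obtuse solution for C:

```python
import math

# Hedged check of problem 44: a = 6, b = 10, area 15√3 with C obtuse
# gives C = 120°, c = 14, and inradius r = √3, so r^2 = 3.
a, b, area = 6.0, 10.0, 15 * math.sqrt(3)
C = math.pi - math.asin(2 * area / (a * b))   # obtuse solution
c = math.sqrt(a*a + b*b - 2*a*b*math.cos(C))  # 14.0
r = area / ((a + b + c) / 2)                  # √3
print(c, r * r)                               # 14.0 3.0
```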
1 2 3 1 1 1 a b c 2s 2c s c p p p 2 2 + − − − + − = = =    47. In a ABC  , the angles A,B,C are in A.P. Show that 2 2 A C a c 2cos 2 a ac c − + = − + Ans. A C 2cos 2 − Sol. ( ) ( ) 2 2 2 2 A C A C 2sin cos a c sin A sinC 2 2 cos A C cos A C a ac c sin A sin AsinC sin C 1 cos2A 1 cos2C 2 2 2 + − + + = = − − + − + − + − − − + ( ) ( ) ( ) 3 A C 2 2, cos 2 2 2 cos2A cos2C cos A C cos A C − = − + − − + + ( ) ( ) ( ) ( ) ( ) A C A C 6cos 6 cos 2 2 3 3 2cos A C cos A C cos A C cos A C cos A C 2 2 − − = = − + − − − + − − − A C 2cos 2 − = Mathematics Progressions of Triangles 17 48. Let AD, BE, CF be the length of internal bisectors of angles A,B,C of triangle ABC. Show that the harmonic mean of AD A B C sec . BE sec ,CFsec 2 2 2 is the harmonic mean of the sides of the triangle Key. 1 1 A a ADsec 2  =  Sol. 2bc A 1 1 1 1 AD cos A b c 2 2 b c ADsec 2   =  = +   +   1 1 A a ADsec 2   =  49. Let ABC be a triangle with altitudes 1 2 3 h ,h ,h and inradius r. Prove that 3 1 2 1 2 3 h r h r h r 6 h r h r h r + + + + +  − − − Ans. Sol. 3 1 2 1 2 3 h r h r h r 6 h r h r h r + + + + +  − − − 1 1 2 ah h 2 a  =  = Similarly 1 3 2 2 h , h b c   = = So 3 1 2 1 2 3 h r h r h r 2 /a /s 2 / b /s 2 /c /s h r h r h r 2 /a /s 2 /c /s 2 /c /s + + +  +   +   +  + = = + + − − −  −  −  − 2s a 2s b 2s c 4s 4s 4s 3 2s a 2s b 2s c 2s a 2s b 2s c + + + = = + = + + − − − − − − − 1 4s 4s 4s 3 3 3 3 3 2s a 2s b 2s c 3 2s a 2s b 2s c 4s 4s 4s         = + + −  =       − − − − − −       + +     Since ( ) AM HM 6   50. Find the point inside a  from which the sum of the squares of distances to the three sides is minimum. Also find the minimum value of the sum of squares of distances. Ans. ( )( )( ) 2 2 2 4 s a s b s c s a b c − − − + + Sol. If a,b, c are the lengths of the sides of the  and x,y,z are the length of perpendicular from the point on the sides BC, CA, AB respectively we have to minimize 2 2 2 x y z t + + = we have 1 1 1 ax by cz 2 2 2 + + =   ax + by + cz = 2 Where  is the area of the ABC  we have the identity; ( )( ) ( ) ( ) ( ) ( ) 2 2 2 2 2 2 2 2 2 2 x y z a b c ax bx cz ax by by cz cz ax  + + + + − + + = − + − + + ( )( ) ( ) 2 2 2 2 2 2 2 x y z a b c ax by cz  + + + +  + + ( )( ) 2 2 2 2 2 2 2 x y z a b c 4  + + + +  Mathematics Progressions of Triangles 18 2 2 2 2 2 2 2 4 x y z a b c   + +  + + and equality only when 2 2 2 2 2 2 x y z ax by cz 2 a b c a b c a b c + +  − − = = + + + + The minimum value of t is 2 2 2 2 4 a b c  + + ( )( )( ) 2 2 2 4 s a s b s c s t min a b c − − − = + + Ans.
188876
https://www.quora.com/Where-is-the-endoplasmic-reticulum-located
Where is the endoplasmic reticulum located? - Quora

Where is the endoplasmic reticulum located?

Prem Prakash Gupta, Professor of Biochemistry at Seema Dental College and Hospital (2003–present) · 7y
Originally Answered: Where is the endoplasmic reticulum found in the cell?

The endoplasmic reticulum (ER) is distributed throughout the cytoplasm and also exists in continuity with the cell membrane and the nuclear membrane. There are two types of ER:

(i) Rough endoplasmic reticulum (RER): the cytosolic membrane surface of the ER is studded with 80S ribosomes, which give it a rough appearance. RER is found in large amounts in cells actively involved in the synthesis and secretion ("export") of proteins, e.g. hepatic cells, pancreatic cells, B lymphocytes, fibroblasts and goblet cells. In these cells, RER takes part in the synthesis, packaging and export of secretory proteins such as insulin, digestive enzymes, blood-clotting proteins, antibodies, collagen and mucin.

(ii) Smooth endoplasmic reticulum (SER): the cytosolic membrane surface is not studded with ribosomes and is therefore smooth. SER takes part in the biosynthesis of lipids (triacylglycerols, phospholipids, glycolipids and cholesterol) and steroid hormones…

Lynette El Sherif, former international polymath · 7y
Originally Answered: What are the locations of the endoplasmic reticulum?

Endoplasmic Reticulum (Rough and Smooth)

Rough ER (RER) is involved in some protein production, protein folding, quality control and despatch.
It is called 'rough' because it is studded with ribosomes.

Smooth ER (SER) is associated with the production and metabolism of fats and steroid hormones. It is called 'smooth' because it is not studded with ribosomes and is associated with smooth, slippery fats.

CELLS NEED THE ROUGH AND THE SMOOTH

Think of a cell as a "multitude of membranes", as we said in an earlier section. This statement certainly applies to the endoplasmic reticulum, an organelle found in eukaryotic cells. About 50% of the total membrane surface in an animal cell is provided by the endoplasmic reticulum (ER). The ER occurs in both plants and animals and is a very important manufacturing site for lipids (fats) and many proteins. Many of these products are made for, and exported to, other organelles.

[Image: an electron microscope image showing part of the rough endoplasmic reticulum in a plant root cell from maize; the dark spots are ribosomes. Courtesy of Chris Hawes, The Research School of Biology & Molecular Sciences, Oxford Brookes University, Oxford, UK.]

There are two types of endoplasmic reticulum: rough endoplasmic reticulum (rough ER) and smooth endoplasmic reticulum (smooth ER). Both types are present in plant and animal cells. The two types often appear separate, but they are sub-compartments of the same organelle. Cells specialising in the production of proteins tend to have a larger amount of rough ER, whilst cells producing lipids (fats) and steroid hormones have a greater amount of smooth ER.

Part of the ER is contiguous with the nuclear envelope. The Golgi apparatus is also closely associated with the ER, and recent observations suggest that parts of the two organelles are so close that some chemical products probably pass directly between them instead of being packaged into vesicles (droplets enclosed within a membrane) and transported through the cytoplasm.

ROUGH ENDOPLASMIC RETICULUM

This is an extensive organelle composed of greatly convoluted but flattish sealed sacs, contiguous with the nuclear membrane. It is called 'rough' endoplasmic reticulum because it is studded on its outer surface (the surface in contact with the cytosol) with ribosomes. These are called membrane-bound ribosomes and are firmly attached to the outer, cytosolic side of the ER. About 13 million ribosomes are present on the RER in the average liver cell. Rough ER is found throughout the cell, but its density is higher near the nucleus and the Golgi apparatus.

Ribosomes on the rough endoplasmic reticulum are 'membrane bound' and are responsible for the assembly of many proteins, in a process called translation. Certain cells of the pancreas and digestive tract, for example, produce large quantities of proteins that function as digestive enzymes.

The rough ER, working with membrane-bound ribosomes, takes polypeptides and amino acids from the cytosol and continues protein assembly, including, at an early stage, recognising a 'destination label' attached to each of them. Proteins are produced for the plasma membrane, Golgi apparatus, secretory vesicles, plant vacuoles, lysosomes, endosomes and the endoplasmic reticulum itself. Some of the proteins are delivered into the lumen (the space inside the ER), whilst others are processed within the ER membrane itself. In the lumen some proteins have sugar groups added to them to form glycoproteins.
Some have metal groups added to them. It is in the rough ER, for example, that four polypeptide chains are brought together to form haemoglobin.

Protein folding unit

It is in the lumen of the rough ER that proteins are folded to produce the highly important biochemical architecture that provides 'lock and key' and other recognition and linking sites.

Protein quality control section

It is also in the lumen that a remarkable process of quality-control checking is carried out. Proteins are subjected to a quality-control check, and any found to be incorrectly formed or incorrectly folded are rejected. These rejects are stored in the lumen or sent for recycling and eventual breakdown to amino acids. A type of emphysema (a lung problem) is caused by the ER quality-control section continually rejecting an incorrectly folded protein: the protein is wrongly folded as a result of receiving an altered genetic message, and the required protein is never exported from the lumen of the rough ER. Research into protein-structure failures relating to HIV is also focusing on reactions in the ER.

Rigorous quality control plays a part in cystic fibrosis

A form of cystic fibrosis is caused by a single missing amino acid, phenylalanine, at a particular position in the protein. The protein might work well without the amino acid, but the very exacting quality-control section spots the error and rejects the protein, retaining it in the lumen of the rough ER. In this case the customer (the person with cystic fibrosis) loses out completely to high standards, when a slightly poorer product would have been better than no product at all.

From Rough ER to Golgi

In most cases proteins are transferred to the Golgi apparatus for 'finishing'. They are conveyed in vesicles, or possibly directly between the ER and Golgi surfaces. After 'finishing' they are delivered to specific locations.

SMOOTH ENDOPLASMIC RETICULUM

Smooth ER is more tubular than rough ER and forms an interconnecting network sub-compartment of the ER. It is found fairly evenly distributed throughout the cytoplasm. It is not studded with ribosomes, hence 'smooth' ER. Smooth ER is devoted almost exclusively to the manufacture of lipids and, in some cases, to the metabolism of them and associated products. In liver cells, for example, smooth ER enables glycogen, stored as granules on its external surface, to be broken down to glucose. Smooth ER is also involved in the production of steroid hormones in the adrenal cortex and endocrine glands.

Smooth ER – the detox stop

Smooth ER also plays a large part in detoxifying a number of organic chemicals, converting them to safer, water-soluble products. Large amounts of smooth ER are found in liver cells, where one of its main functions is to detoxify products of natural metabolism and to endeavour to detoxify overloads of ethanol from excess alcoholic drinking, as well as barbiturates from drug overdose. To assist with this, smooth ER can double its surface area within a few days, returning to its normal size when the assault has subsided.

The contraction of muscle cells is triggered by the orderly release of calcium ions; these ions are released from the smooth endoplasmic reticulum.

SUMMARY

The endoplasmic reticulum is an organelle found in both eukaryotic animal and plant cells. It often appears as two interconnected sub-compartments, namely rough ER and smooth ER. Both types consist of membrane-enclosed, interconnected flattened tubes.
The rough ER, studded with millions of membrane-bound ribosomes, is involved with the production, folding, quality control and despatch of some proteins. Smooth ER is largely associated with lipid (fat) manufacture and metabolism and with steroid hormone production. It also has a detoxification function.

Assistant Bot · Sep 6

The endoplasmic reticulum (ER) is an extensive membranous network located in the cytoplasm of eukaryotic cells. It is continuous with the outer membrane of the nuclear envelope and extends throughout the cytosol, forming sheets, tubules, and cisternae that interact with other organelles (Golgi apparatus, mitochondria, plasma membrane, endosomes).

Key points:
- Cellular compartment: cytoplasm of eukaryotic cells (not present in prokaryotes).
- Continuity with nucleus: the ER membrane is contiguous with the outer nuclear membrane, so the ER lumen is topologically continuous with the perinuclear space.
- Subdomains: Rough ER (RER), ribosome-studded regions usually located near the nucleus, is the major site of membrane and secretory protein synthesis. Smooth ER (SER), ribosome-free tubular regions distributed throughout the cytoplasm, is involved in lipid synthesis, calcium storage, and detoxification.
- Spatial relationships: the ER forms contact sites with mitochondria (mitochondria-associated membranes), the Golgi, endosomes and the plasma membrane to coordinate lipid and ion exchange and membrane trafficking.
- Morphology and distribution vary by cell type: secretory cells have abundant RER; muscle cells have specialized SER (sarcoplasmic reticulum) closely associated with the contractile apparatus.

Josh Steinke · Research · 10y

The endoplasmic reticulum is located near the nucleus of the cell, where protein synthesis is carried out following transcription and translation of DNA. The function of the rough endoplasmic reticulum is to produce and secrete different proteins assembled by its ribosomes from amino acids into polypeptide strands.
188877
https://www.allaboutcircuits.com/technical-articles/understanding-poles-and-zeros-in-transfer-functions/
Technical Article

Understanding Poles and Zeros in Transfer Functions

May 26, 2019, by Robert Keim

This article explains what poles and zeros are and discusses the ways in which transfer-function poles and zeros are related to the magnitude and phase behavior of analog filter circuits.

In the previous article, I presented two standard ways of formulating an s-domain transfer function for a first-order RC low-pass filter. Let’s briefly review some essential concepts. A transfer function mathematically expresses the frequency-domain input-to-output behavior of a filter. We can write a transfer function in terms of the variable s, which represents complex frequency, and we can replace s with jω when we need to calculate magnitude and phase response at a specific frequency. The standardized form of a transfer function is like a template that helps us to quickly determine the filter’s defining characteristics. Mathematical manipulation of the standardized first-order transfer function allows us to demonstrate that a filter’s cutoff frequency is the frequency at which magnitude is reduced by 3 dB and phase is shifted by –45°.

Poles and Zeros

Let’s assume that we have a transfer function in which the variable s appears in both the numerator and the denominator. In this situation, at least one value of s will cause the numerator to be zero, and at least one value of s will cause the denominator to be zero. A value that causes the numerator to be zero is a transfer-function zero, and a value that causes the denominator to be zero is a transfer-function pole. Let’s consider the following example:

$$T(s)=\frac{Ks}{s+\omega_{O}}$$

In this system, we have a zero at s = 0 and a pole at s = –ω_O. Poles and zeros are defining characteristics of a filter. If you know the locations of the poles and zeros, you have a lot of information about how the system will respond to signals with different input frequencies.

The Effect of Poles and Zeros

A Bode plot provides a straightforward visualization of the relationship between a pole or zero and a system’s input-to-output behavior. A pole frequency corresponds to a corner frequency at which the slope of the magnitude curve decreases by 20 dB/decade, and a zero corresponds to a corner frequency at which the slope increases by 20 dB/decade. In the following example, the Bode plot is the approximation of the magnitude response of a system that has a pole at 10² radians per second (rad/s) and a zero at 10⁴ rad/s.
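As an aside not in the original article: this corner-frequency behavior is easy to reproduce numerically. The sketch below builds that same pole/zero pair with SciPy (the unity low-frequency gain is my own simplifying assumption) and plots the resulting Bode magnitude.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# H(s) = (1 + s/1e4) / (1 + s/1e2): pole at 1e2 rad/s, zero at 1e4 rad/s,
# normalized to 0 dB at low frequency (an assumption for illustration).
num = [1/1e4, 1.0]           # numerator coefficients in powers of s
den = [1/1e2, 1.0]           # denominator coefficients in powers of s
w = np.logspace(0, 6, 500)   # 1 to 1e6 rad/s

w, mag, phase = signal.bode((num, den), w)

plt.semilogx(w, mag)         # slope falls by 20 dB/decade at the pole (1e2)
plt.xlabel("ω (rad/s)")      # and rises by 20 dB/decade at the zero (1e4)
plt.ylabel("|H| (dB)")
plt.show()
```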
Phase Effects

In the previous article, we saw that the mathematical origin of a low-pass filter’s phase response is the inverse tangent function. If we use the inverse tangent function (more specifically, the negative inverse tangent function) to generate a plot of phase (in degrees) versus logarithmic frequency, we end up with the following shape: the Bode plot approximation for the phase shift generated by a pole is a straight line representing –90° of phase shift. The line is centered on the pole frequency and has a slope of –45 degrees per decade, which means that the downward-sloping line begins one decade before the pole frequency and ends one decade after the pole frequency. The effect of a zero is the same except that the line has a positive slope, such that the total phase shift is +90°. The following example represents a system that has a pole at 10² rad/s and a zero at 10⁵ rad/s.

The Hidden Zero

If you have read the previous article, you know that the transfer function of a low-pass filter can be written as follows:

$$T(s)=\frac{a_{O}}{s+\omega_{O}}$$

Does this system have a zero? If we apply the definition given earlier in this article, we will conclude that it does not—the variable s does not appear in the numerator, and therefore no value of s will cause the numerator to equal zero. It turns out, though, that it does have a zero, and to understand why, we need to consider a more generalized definition of transfer-function poles and zeros: a zero (z) occurs at a value of s that causes the transfer function to decrease to zero, and a pole (p) occurs at a value of s that causes the transfer function to tend toward infinity:

$$\lim_{s\rightarrow z}T(s)=0$$

$$\lim_{s\rightarrow p}T(s)=\infty$$

Does the first-order low-pass filter have a value of s that results in T(s) → 0? Yes, it does, namely, s = ∞. Thus, the first-order low-pass system has a pole at ω_O and a zero at ω = ∞. I’ll attempt to provide a physical interpretation of the zero at ω = ∞: it indicates that the filter cannot continue attenuating “forever” (where “forever” refers to frequency, not time). If you manage to create an input signal whose frequency continues to increase until it “reaches” infinity rad/s, the zero at s = ∞ causes the filter to stop attenuating, i.e., the slope of the magnitude response increases from –20 dB/decade to 0 dB/decade.

Conclusion

We’ve explored the basic theoretical and practical aspects of transfer-function poles and zeros, and we’ve seen that we can create a direct relationship between a filter’s pole and zero frequencies and its magnitude and phase response. In the next article, we’ll examine the transfer function of a first-order high-pass filter.
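To connect the pole location back to the –3 dB and –45° cutoff behavior reviewed at the start of the article, here is a minimal numeric check (my addition; the 1 kHz cutoff is an arbitrary assumption):

```python
import numpy as np

w0 = 2*np.pi*1e3          # assumed cutoff frequency, 1 kHz
a0 = w0                   # choose a0 = w0 so that T(0) = 1 (0 dB DC gain)
T = a0 / (1j*w0 + w0)     # evaluate T(s) = a0/(s + w0) at s = j*w0

print(20*np.log10(abs(T)))      # ≈ -3.01 dB at the cutoff frequency
print(np.degrees(np.angle(T)))  # -45.0 degrees of phase shift
```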
Comments

Lonne Mays · May 31, 2019: Excellent article, Robert! The mathematical subject matter was presented clearly and supported by explanations that built an intuitive understanding.

Mauricio95 · June 10, 2022: “Thus, the first-order low-pass system has a pole at ωO” — shouldn’t this be a pole at –ωO?
188878
https://funexpectedapps.com/en/blog-posts/key-spatial-skills-for-kindergarten-what-to-teach
Key Spatial Skills for Kindergarten: What to Teach

Early Childhood Education · Aug 20, 2025

Explore essential spatial skills for kindergarteners, including shape recognition and hands-on activities to foster their learning and development.

Spatial skills help young kids understand shapes, objects, and spaces, building a foundation for math, reading, and STEM success. Here’s what to teach and how to practice:

Core Spatial Skills to Teach:
- Spatial Vocabulary: Use terms like "above", "next to", and "inside."
- Spatial Relationships: Understand how objects relate to each other and learn basic mapping.
- Shape Recognition: Identify and describe basic shapes like circles and squares.
- Composing/Decomposing Shapes: Combine or break apart shapes (e.g., two triangles make a square).
- Mental Rotation: Visualize and rotate objects in the mind.
- Pattern Recognition: Spot, create, and predict patterns.

Fun Activities to Build Spatial Skills:
- Block Play: Build towers, bridges, and more.
- Puzzles: Solve puzzles to improve problem-solving and visualization.
- Art & Drawing: Sketch objects from different angles or create patterns.
- Movement Games: Play obstacle courses to learn spatial terms (e.g., "under" or "through").
- Direction Games: Practice commands like "place the ball behind the chair."
- Map Activities: Create simple maps of familiar places like the classroom.

Teaching these skills early makes learning fun and sets kids up for long-term success in school and life.

Core Spatial Skills for Kindergarten Students

1. Basic Shape Recognition

Recognizing shapes is one of the first steps in developing spatial awareness. Kindergarteners start by identifying basic shapes - like circles, squares, and triangles - and understanding their properties, such as the number of sides and corners. This ability helps them recognize shapes in the world around them.

Learning shapes with the Funexpected Math app

"Improving spatial experiences prior to school entry is likely to increase children's readiness for school… Optimizing spatial performance may be an underutilized route to improving mathematics achievement." - Verdine et al., 2017

In other words, helping children recognize and work with shapes is more than a visual task — it’s a crucial part of nurturing spatial and mathematical thinking.

2. Shape Building and Breaking

This skill focuses on how shapes can be put together or taken apart. For example, a child might combine two triangles to form a square or break a rectangle into smaller pieces. Research highlights that five-year-olds who can successfully manipulate shapes like this tend to develop stronger math abilities and more advanced spatial reasoning later on.

Tangram puzzle in the Funexpected Math app

3. Position and Direction Words

Understanding spatial vocabulary is essential for describing where things are or how they move. Terms like "above", "below", "inside", and "next to" help children communicate spatial relationships clearly. Some key words include:
- Above / below
- In front / behind
- Next to / beside
- Inside / outside
- Between / through
- Left / right

Learning spatial vocabulary with the Funexpected Math app

Using these words regularly helps kids create mental maps of their surroundings. The concepts of “right” and “left” are extremely important for the overall development of children’s spatial skills. They are connected with many other, more complex concepts – for example, the concept of symmetry. The depth of understanding of “right” and “left” strongly depends on age.
At 4–5 years, children can usually identify their own left hand and find objects located to the left or right of themselves. At 5–6 years, children are able to identify the left and right hands of another person by mentally placing themselves next to them – and they also begin to understand how mirror reflection works.

A task on mirror reflections in the Funexpected Math app

4. Mental Rotation Skills

Mental rotation is the ability to visualize objects or shapes from different angles. This skill is critical for spatial reasoning and can be developed through both physical interaction and mental visualization. Children practice:
- Imagining how objects look from various angles
- Predicting how figures change when rotated
- Matching rotated objects to their original form

Math game on mental rotation: choose the tile that matches the central shape

These activities help them better understand how objects relate to one another in space.

5. Understanding Space and Maps

Spatial awareness involves grasping how objects relate to each other and learning basic mapping skills. Children begin to:
- Estimate distances
- Navigate through spaces
- Follow simple maps
- Understand directional concepts

Example from the Funexpected Math app (Advanced Program for ages 5–6)

These early experiences with space and maps lay the groundwork for more advanced geography and geometry skills.

6. Pattern Recognition

Learning to recognize patterns with the “Gods” game in the Funexpected Math app

Recognizing and working with patterns is a key part of both spatial reasoning and math. Dr. Nora Newcombe from Temple University explains that strong spatial reasoning in preschool supports math learning in later years. Kindergarteners practice identifying, creating, and predicting patterns, which helps them notice relationships and structures in the world around them. Research underscores the importance of these spatial skills in shaping future success, especially in STEM fields. In fact, studies suggest that focused training in spatial reasoning could significantly increase the number of students with the spatial abilities needed for engineering and other technical careers. These foundational skills are the building blocks for more advanced spatial learning and problem-solving.

Hands-on Activities for Teaching Spatial Skills

Engaging in hands-on activities turns learning spatial skills into fun, practical experiences that kids can easily relate to.

1. Building with Blocks

Playing with blocks is a classic way for kindergarteners to develop spatial skills. Studies show that structured block play boosts performance on spatial visualization and mental rotation tasks. Create a play area with a variety of blocks, and join in by using spatial language and setting simple challenges, like building towers or bridges.

2. Working with Puzzles

Puzzles are a fantastic tool for enhancing problem-solving skills and spatial reasoning. They also improve focus, visual perception, and memory. Regular puzzle-solving strengthens these abilities over time. For instance, while putting pieces together, kids practice recognizing shapes, patterns, and how parts fit into a whole.

3. Drawing and Art Projects

Drawing encourages spatial visualization by activating different parts of the brain. Activities like sketching objects from different perspectives or creating art from verbal prompts are especially effective.
Some fun ideas include:
- Sketching objects from various angles
- Creating "secret messages" with wax and watercolors
- Drawing three-dimensional objects on paper

These projects combine creativity with spatial reasoning, making learning both fun and educational.

4. Active Play and Movement

Physical play helps kids grasp spatial concepts through movement. Games that involve directions and positions - like obstacle courses where kids crawl under, jump over, or move through objects - are excellent for reinforcing these ideas in a hands-on way.

5. Direction Word Games

Simple games that use directional commands are great for teaching spatial vocabulary. For example, ask kids to "place the ball behind the chair" or "stand between two friends." These interactive activities make learning spatial terms both active and memorable.

6. Making Simple Maps

Introduce mapping by focusing on familiar spaces like the classroom or playground. Start with basic maps of these areas, then gradually introduce ideas like scale and perspective. This helps kids understand spatial relationships and navigate their surroundings better.

| Activity Type | Benefits | Key Spatial Words to Use |
| --- | --- | --- |
| Block Play | Improves mental rotation and construction | Above, below, next to, on top of, underneath |
| Puzzles | Enhances problem-solving and perception | Inside, outside, rotate, fit, match, edge |
| Drawing | Builds visualization and proportion skills | Behind, in front, beside, between, around |
| Movement | Develops physical spatial awareness | Through, around, between, over, under |
| Direction Games | Strengthens spatial vocabulary | Left, right, forward, behind, near |
| Mapping | Teaches navigation and relationships | Near, far, across, above, below, around |

Incorporating these activities into everyday lessons helps reinforce spatial skills in a natural, enjoyable way.

Adding Spatial Skills to Daily Lessons

Math and Shape Lessons

Incorporating spatial skills into math lessons lays a solid groundwork for geometry and spatial reasoning. Start by encouraging students to visualize and mentally manipulate shapes before introducing physical materials. For example, use pattern blocks to create specific shapes. Challenge students to form a hexagon using different combinations of smaller shapes – this strengthens their ability to visualize spatial relationships. These activities not only enhance mathematical understanding but also naturally transition into hands-on science projects that further develop spatial thinking.

Digital tools can also support this kind of learning. In the Funexpected Math app, children explore spatial concepts through interactive puzzles, visual logic games, and geometry-based challenges. These tasks provide a playful yet structured way to practice spatial reasoning and reinforce skills like mental rotation, composing and decomposing shapes, and navigating directions — all within an engaging digital environment.

Parents and educators praise its impact. A parent from Big Tech shared: "I have a math background myself and I wanted to find an app with a wide curriculum. Smth about logic and geometry, not just counting. And this one was the perfect choice. I see how my daughter starts to understand even complex concepts." – Jon Favertt969

Science and Building Projects

Science activities are a fantastic way to nurture spatial reasoning. For instance, freeze toy dinosaurs in ice and have students figure out how to extract them using warm water, tools, or salt.
This fun, hands-on task sharpens their spatial problem-solving skills. Building projects are another great way to boost spatial learning. Here are a few examples:

| Project Type | Materials Needed | Spatial Skills Developed |
| --- | --- | --- |
| Bridge Building | Cardboard, blocks | Visualization, balance |
| Toy Parachutes | Tissue paper, string | Understanding vertical relationships |
| Rain Gauge | Clear containers | Measurement and scale |

These projects don’t just teach science - they also set the stage for applying spatial language and concepts in reading and literacy lessons.

Reading and Word Skills

Linking spatial concepts to reading activities adds depth to daily lessons. For example, the book Make Way for Ducklings can be used to reinforce positional vocabulary. Similarly, with Rosie's Walk, you can create a simple map of the farm setting and use character cutouts to track movements throughout the story. These mapping activities help students connect spatial reasoning to literacy.

Turn storytime into an opportunity for spatial learning by:
- Picking books with vivid visual settings and geographic elements.
- Using character cutouts to act out movements on maps.
- Encouraging students to create their own story maps with basic shapes and symbols.

These activities not only make reading more interactive but also tie together math, science, and literacy in a meaningful way.

Conclusion: Setting Up for Success

Spatial skills play a crucial role in early education, serving as a foundation for success in STEM fields, the arts, and everyday problem-solving tasks. These insights pave the way for practical strategies that can easily be incorporated into daily routines to enhance spatial learning. Research shows that children who excel in the spatial and geometric sections of the TIMSS often perform better in overall math. To help develop these skills, here are some effective activities:

| Activity Type | Benefits | How to Implement |
| --- | --- | --- |
| Daily Conversations | Expands spatial vocabulary | Use words like "over", "under", and "between" in everyday discussions. |
| Construction Play | Boosts mental rotation abilities | Provide blocks or LEGOs for 15–20 minutes of play each day. |
| Visual Arts | Develops 2D–3D understanding | Encourage drawing from different perspectives. |
| Physical Movement | Enhances spatial awareness | Organize outdoor activities that involve exploration. |

These activities not only support the development of spatial skills but also blend seamlessly into everyday learning across various subjects. By incorporating them regularly, educators and parents can foster noticeable improvements in a child's spatial abilities.

"Spatial activities are fun and engaging for students. Most of all, every student can improve their spatial skills, and these spatial skills can set them up for success in STEM, the arts, and everyday life." – Edutopia

With consistent practice and an emphasis on spatial activities, children can build a strong foundation for future learning and problem-solving, setting them up for long-term success. Regular engagement with these strategies makes a significant difference in developing these essential skills.

FAQs

Why are spatial skills in kindergarten important for a child's future in STEM?

Spatial skills nurtured during kindergarten are crucial for a child's future success in STEM fields - science, technology, engineering, and mathematics.
Studies reveal a strong connection between well-developed spatial abilities and higher math performance, as well as a greater chance of pursuing careers in STEM. Activities like building with blocks, solving puzzles, and engaging in imaginative play do more than entertain. They enhance spatial reasoning and lay the groundwork for essential math skills. These early experiences create a solid foundation for tackling complex problems and learning in STEM disciplines later in life. Developing spatial skills at a young age is a meaningful step toward shaping a child's academic and professional future.

What are some easy and fun ways to help my child develop spatial skills at home?

You can help your child develop spatial skills right at home with fun and simple activities. Building blocks and puzzles are fantastic for boosting problem-solving abilities and spatial awareness. Whether they’re stacking blocks into a tower or completing a puzzle, these activities encourage kids to explore shapes and how they fit together. Creative activities like art and crafts are another great option. Drawing, painting, or making collages helps children think about dimensions and proportions in a hands-on way. For a bit of outdoor fun, try a scavenger hunt. It’s a playful way for kids to practice navigating their surroundings and using spatial reasoning to find hidden treasures. Don’t forget to weave spatial language into your daily conversations. Words like "next to", "above", or "below" can help your child connect language with spatial concepts, making these ideas feel more natural and relatable.

Why is spatial vocabulary important for kids, and how can we help them learn it?

Spatial vocabulary plays a big role in a child's early development. Words like "above", "below", or "next to" do more than just describe positions - they lay the groundwork for critical skills like problem-solving, navigation, and understanding math and science concepts. Studies even show that kids exposed to more of these spatial terms tend to excel in spatial reasoning tasks and build stronger cognitive abilities overall. The key to helping children learn this vocabulary is to make it fun and interactive. For instance, during block play or puzzles, encourage kids to talk about where objects are, like saying a block is "on top of" another or "next to" it. Everyday moments work too - ask them to place something "under" the table or "beside" the chair. You can also turn learning into an adventure with games like treasure hunts, where they follow or give directions using spatial words. These playful activities make learning natural and enjoyable while reinforcing essential concepts.
188879
https://pmc.ncbi.nlm.nih.gov/articles/PMC4402907/
Comparison of the efficacy of four cholinesterase inhibitors in combination with memantine for the treatment of Alzheimer’s disease

Int J Clin Exp Med. 2015 Feb 15;8(2):2944–2948. PMCID: PMC4402907; PMID: 25932260.

Zi-Qiang Shao, Department of Neurology, China-Japan Friendship Hospital, Beijing 100029, China. Address correspondence to: Dr. Zi-Qiang Shao, Department of Neurology, China-Japan Friendship Hospital, Yinghua Dongjie, Chaoyang District, Beijing 100029, China. Tel: +86-10-84205259; Fax: +86-10-84205259; E-mail: shangziqiang126@sina.com

Received 2014 Nov 21; Accepted 2015 Jan 28; Collection date 2015.

Abstract

Background: Combined use of memantine and acetylcholinesterase inhibitors (AChEIs) has shown improved outcomes in patients with Alzheimer’s disease (AD). However, it is not clear which AChEI is optimal for combined treatment with memantine. Methods: A total of 110 AD patients were randomized to receive memantine and one of the following add-on drugs: placebo, donepezil, rivastigmine, galantamine, or huperzine A for 24 weeks (n=22 per group). At baseline, 12 weeks, and 24 weeks, the patients were evaluated using the mini-mental state examination (MMSE) and Alzheimer Disease Cooperative Study-Activities of Daily Living (ADCS-ADL) scales. Adverse events were recorded to analyze the safety profile. Results: The MMSE scores were significantly increased and the ADL scores were significantly decreased at 12 weeks and 24 weeks in all five groups compared with baseline (all P<0.01). At 24 weeks, patients treated with memantine+huperzine A showed better MMSE and ADL scores than those treated with memantine+placebo. Conclusions: Huperzine A may be an optimal choice for combined therapy with memantine in treating AD.

Keywords: Alzheimer’s disease, memantine, huperzine A, drug therapy

Introduction

Alzheimer’s disease (AD) is the most common cause of dementia, and its prevalence is increasing globally. It is estimated that there will be one AD patient in every 85 individuals by the year 2050. In addition, AD causes significant emotional and financial burdens to the patient’s family and society.
The pathogenic mechanisms of AD are still not clear. It has been shown that various factors are involved in the development of AD, such as genetic background, environment, behavior, and developmental components. Currently, there are four acetylcholinesterase inhibitors (AChEIs) approved by the U.S. Food and Drug Administration for the treatment of AD: tacrine, donepezil, rivastigmine, and galantamine. However, tacrine has rarely been used due to its hepatotoxicity. Huperzine A is a natural product isolated from the Chinese club moss (Huperzia serrata) with acetylcholinesterase-inhibiting effects. It has shown a promising safety profile in a phase II trial in patients with mild to moderate AD. Another drug, memantine, is approved for clinical use in moderate to severe AD with convincing evidence of efficacy [6,7]. However, its role in early AD is unclear. The combined use of memantine and an AChEI has shown improved efficacy and patient outcomes in the treatment of AD [8-14]. However, there is no report comparing the efficacy of different AChEIs in combination with memantine for the treatment of AD. In this trial, we investigated the outcome of AD patients treated with memantine combined with one of four AChEIs: donepezil, rivastigmine, galantamine, and huperzine A.

Materials and methods

Patients

This clinical trial included 110 consecutive patients treated at our hospital from October 2009 to September 2013. The diagnosis of AD was made according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). All patients had mild to moderate symptoms with mini-mental state examination (MMSE) scores of 10-24. Patients with the following conditions were excluded from this study: vascular or mixed dementia; epilepsy; depression; schizophrenia; administration of other psychotropic drugs within the prior two weeks; allergy to memantine or AChEIs. This study was approved by the Ethics Committee of our hospital. Written informed consent was obtained from the patients or their families.

Treatment

All patients were randomly assigned into five groups (n=22) and treated with memantine (XinyiJiufu, Shanghai, China) and one of the following add-on drugs: placebo, donepezil (Haosen, China), rivastigmine (Novartis, China), galantamine (Tianpu, China), or huperzine A (Fuhua, China). The doses of the drugs are listed in Table 1. All patients underwent a washout period of one week before the initiation of treatment. The treatment lasted for 24 weeks.

Table 1.
Doses of the drugs

| Drugs | Week 1 | Week 2 | Week 3 | Week 4 | Weeks 5-24 |
| --- | --- | --- | --- | --- | --- |
| Memantine | 5 mg in the morning | 5 mg, twice daily | 10 mg in the morning, 5 mg in the afternoon | 10 mg, twice daily | 10 mg, twice daily |
| Donepezil | 5 mg before bed | 5 mg before bed | 5 mg before bed | 5 mg before bed | 10 mg before bed |
| Rivastigmine | 1.5 mg, twice daily | 1.5 mg, twice daily | 1.5 mg, twice daily | 1.5 mg, twice daily | 3 mg, twice daily |
| Galantamine | 2 mg, twice daily | 2 mg, twice daily | 4 mg, twice daily | 4 mg, twice daily | 6 mg, twice daily |
| Huperzine A | 100 μg, twice daily | 100 μg, twice daily | 100 μg, twice daily | 100 μg, twice daily | 100 μg, twice daily |
| Placebo | One tablet, twice daily | One tablet, twice daily | One tablet, twice daily | One tablet, twice daily | One tablet, twice daily |

Outcome measurement

At baseline, 12 weeks, and 24 weeks, the patients were evaluated using the MMSE and Alzheimer Disease Cooperative Study-Activities of Daily Living (ADCS-ADL) scales.

Safety profile

Adverse effects such as nausea, vomiting, and dizziness were recorded. Electrocardiography and blood and urine biochemistry were performed at baseline and every four weeks during the treatment.

Statistical analysis

Continuous data were represented as mean ± standard deviation (SD) and compared with one-way ANOVA. Categorical data were compared with the χ² test. Statistical analysis was performed using SPSS 12.0 software (SPSS, Chicago, IL). A P-value less than 0.05 was considered statistically significant.
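As an illustration of the analysis described above (not part of the paper, whose per-patient data are unpublished), the sketch below reproduces the two kinds of tests with SciPy. The per-patient MMSE scores are simulated from the group means and SDs reported in Table 3, and the adverse-event counts come from Table 4 later in the paper.

```python
import numpy as np
from scipy import stats

# Simulated (not real) per-patient MMSE scores at 24 weeks, drawn from the
# group means and SDs reported in Table 3 (n = 22 per arm).
rng = np.random.default_rng(0)
groups = {
    "placebo":      (18.90, 2.54),
    "donepezil":    (19.27, 2.22),
    "rivastigmine": (19.31, 2.80),
    "galantamine":  (19.72, 2.18),
    "huperzine A":  (22.18, 1.81),
}
samples = [rng.normal(m, sd, 22) for m, sd in groups.values()]

# One-way ANOVA across the five arms (continuous outcome)
f, p = stats.f_oneway(*samples)
print(f"ANOVA: F = {f:.2f}, p = {p:.4g}")

# Chi-square test on adverse-event counts (categorical outcome):
# rows = arms, columns = (patients with events, patients without)
events = np.array([[5, 17], [7, 15], [8, 14], [6, 16], [7, 15]])
chi2, p, dof, _ = stats.chi2_contingency(events)
print(f"Chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```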
Results

Patient demographics

This study included 53 males and 57 females with a mean age of 73.17±6.94 years (range 56-84 years). The mean disease course was 3.48±2.02 years (range 1-9 years). No significant differences were found in sex, age, disease course, or baseline MMSE/ADL scores between the five groups (Table 2).

Table 2. Patient demographics

| | Memantine+placebo (n=22) | Memantine+donepezil (n=22) | Memantine+rivastigmine (n=22) | Memantine+galantamine (n=22) | Memantine+huperzine A (n=22) |
| --- | --- | --- | --- | --- | --- |
| Male/female | 11/11 | 10/12 | 11/11 | 11/11 | 10/12 |
| Age (year) | 73.04±7.10 | 73.40±6.04 | 73.13±7.08 | 73.36±7.81 | 72.90±7.17 |
| Disease course (year) | 3.59±2.15 | 3.45±1.99 | 3.68±2.16 | 3.31±1.67 | 3.36±2.25 |
| MMSE | 15.27±1.60 | 15.09±1.77 | 15.40±1.73 | 15.36±1.76 | 15.45±1.73 |
| ADL | 35.45±1.84 | 35.13±2.09 | 35.40±2.08 | 35.04±1.91 | 35.27±1.98 |

Treatment outcomes

The MMSE scores were significantly increased and the ADL scores were significantly decreased at 12 weeks and 24 weeks in all five groups compared with baseline (all P<0.01). At 12 weeks, no significant difference in MMSE scores was found among the groups. At 24 weeks, patients treated with memantine plus huperzine A showed significantly higher MMSE scores than those treated with memantine plus placebo (P<0.05). At 12 weeks, both galantamine and huperzine A as add-ons to memantine significantly decreased the ADL scores compared with memantine alone (both P<0.05). However, only memantine plus huperzine A showed significantly decreased ADL scores in comparison with memantine alone at 24 weeks (P<0.05). These results indicate that memantine plus huperzine A is superior to the other drug combinations in terms of MMSE and ADL scores at 24 weeks (Table 3).

Table 3. MMSE and ADL scores at 24 weeks

| | Memantine+placebo (n=22) | Memantine+donepezil (n=22) | Memantine+rivastigmine (n=22) | Memantine+galantamine (n=22) | Memantine+huperzine A (n=22) |
| --- | --- | --- | --- | --- | --- |
| MMSE at baseline | 15.27±1.60 | 15.09±1.77 | 15.40±1.73 | 15.36±1.76 | 15.45±1.73 |
| MMSE at 12 weeks | 17.72±2.09 | 18.00±2.37 | 18.09±2.34 | 18.50±2.54 | 18.36±2.44 |
| MMSE at 24 weeks | 18.90±2.54 | 19.27±2.22 | 19.31±2.80 | 19.72±2.18 | 22.18±1.81 |
| ADL at baseline | 35.45±1.84 | 35.13±2.09 | 35.40±2.08 | 35.04±1.91 | 35.27±1.98 |
| ADL at 12 weeks | 32.77±2.32 | 31.86±2.39 | 32.00±2.63 | 31.04±2.64 | 31.54±2.17 |
| ADL at 24 weeks | 32.04±1.81 | 31.40±2.36 | 31.86±2.37 | 30.90±2.61 | 28.04±3.21 |

MMSE, mini-mental state examination; ADL, Alzheimer Disease Cooperative Study-Activities of Daily Living. Comparisons are vs memantine+placebo.

Adverse effects

During the 24-week treatment, three patients withdrew due to severe adverse effects: two patients with severe nausea and vomiting in the memantine+rivastigmine group, and one patient with hepatotoxicity in the memantine+donepezil group. The other patients experienced mild adverse effects that resolved spontaneously. No remarkable changes were noticed in blood pressure, heart rate, electrocardiography, or biochemistry during the treatment. The incidence of adverse effects did not differ significantly among the five groups (Table 4).

Table 4. Incidence of adverse effects

| | Memantine+placebo (n=22) | Memantine+donepezil (n=22) | Memantine+rivastigmine (n=22) | Memantine+galantamine (n=22) | Memantine+huperzine A (n=22) |
| --- | --- | --- | --- | --- | --- |
| Patients with adverse effects (n) | 5 | 7 | 8 | 6 | 7 |
| Incidence of adverse effects | 22.72% | 31.81% | 36.36% | 27.27% | 31.81% |

Discussion

In this preliminary trial, we compared the efficacy of four AChEIs in combination with memantine in the treatment of AD. The results showed that memantine+huperzine A was superior to the other three AChEIs in combination with memantine in improving MMSE and ADL scores at 12 and 24 weeks. The cholinergic neurons are the main neurotransmitter system involved in AD, and basal forebrain cholinergic loss is an important pathologic process. These neurons maintain cortical activity and cerebral blood flow, and modulate cognition, learning, task- and memory-related activities, and the regulation of sleep-wake cycles [18,19]. Considering the many functions of cholinergic neurons, the symptom complex in AD can be at least partially understood. AChEIs enhance cholinergic neurotransmission through inhibition of acetylcholinesterase, thus decreasing the breakdown of acetylcholine. Various short-term trials of AChEI monotherapy have shown clinically apparent and encouraging improvement in cognitive function, a slowed pace of functional decline or clinical worsening compared with placebo, and reduced behavioral symptoms in mild-to-moderate and moderate-to-severe AD patients; data from meta-analyses also attest to the same. In the pathological state of AD, there is a low and persistent state of N-methyl-D-aspartate (NMDA) receptor activation even at rest; in such states, Mg2+ ions are excluded from the channel, thereby allowing continuous Ca2+ flow across the membrane. Memantine is an uncompetitive NMDA antagonist with voltage dependency, rapid blocking kinetics, and moderate affinity, and it blocks the channel by trapping it in open conformation.
The moderate-affinity and voltage-dependency properties of memantine allow it to block this persistent NMDA activation and are thus beneficial in AD. The three main studies that have examined the role of memantine in mild to moderate AD show some beneficial effects on cognitive and global functioning status, but it does not impede the progression of disease [11,22,23]. A recent meta-analysis also indicates the same. Memantine's lack of benefit in the early stages is not yet well understood. The involvement of cholinergic neurons probably occurs early in the disease, but damage to the glutamatergic system and excitotoxic degeneration occur late in the course of disease. However, combined use of memantine and AChEIs has shown improved outcomes in AD patients [8-14]. In our study, all four AChEIs in combination with memantine significantly improved the MMSE and ADL scores of AD patients at 12 and 24 weeks. Among them, only huperzine A showed better efficacy compared with the other AChEIs. Huperzine A, derived from the Chinese herb Huperzia serrata, was identified by scientists in China in the 1980s as a potent, reversible, selective inhibitor of acetylcholinesterase, with a mechanism of action similar to donepezil, rivastigmine, and galantamine. A large number of preclinical studies and clinical trials have shown the potential effect of huperzine A in treating AD. In conclusion, huperzine A may be an optimal choice for combined therapy with memantine in treating AD. However, due to the relatively small sample size, this conclusion needs further investigation.

Disclosure of conflict of interest

None.

References

1. Ferri CP, Prince M, Brayne C, Brodaty H, Fratiglioni L, Ganguli M, Hall K, Hasegawa K, Hendrie H, Huang Y, Jorm A, Mathers C, Menezes PR, Rimmer E, Scazufca M; Alzheimer's Disease International. Global prevalence of dementia: a Delphi consensus study. Lancet. 2005;366:2112-7. doi:10.1016/S0140-6736(05)67889-0.
2. Brookmeyer R, Johnson E, Ziegler-Graham K, Arrighi HM. Forecasting the global burden of Alzheimer's disease. Alzheimers Dement. 2007;3:186-91. doi:10.1016/j.jalz.2007.04.381.
3. Watkins PB, Zimmerman HJ, Knapp MJ, Gracon SI, Lewis KW. Hepatotoxic effects of tacrine administration in patients with Alzheimer's disease. JAMA. 1994;271:992-8.
4. Zhang HY, Yan H, Tang XC. Non-cholinergic effects of huperzine A: beyond inhibition of acetylcholinesterase. Cell Mol Neurobiol. 2008;28:173-83. doi:10.1007/s10571-007-9163-z.
5. Rafii MS, Walsh S, Little JT, Behan K, Reynolds B, Ward C, Jin S, Thomas R, Aisen PS; Alzheimer's Disease Cooperative Study. A phase II trial of huperzine A in mild to moderate Alzheimer disease. Neurology. 2011;76:1389-94. doi:10.1212/WNL.0b013e318216eb7b.
6. Hellweg R, Wirth Y, Janetzky W, Hartmann S. Efficacy of memantine in delaying clinical worsening in Alzheimer's disease (AD): responder analyses of nine clinical trials with patients with moderate to severe AD. Int J Geriatr Psychiatry. 2012;27:651-6. doi:10.1002/gps.2766.
7. McShane R, Areosa Sastre A, Minakaran N. Memantine for dementia. Cochrane Database Syst Rev. 2006;2:CD003154. doi:10.1002/14651858.CD003154.pub5.
8. Atri A, Molinuevo JL, Lemming O, Wirth Y, Pulte I, Wilkinson D.
Memantine in patients with Alzheimer's disease receiving donepezil: new analyses of efficacy and safety for combination therapy. Alzheimers Res Ther. 2013;5:6. doi:10.1186/alzrt160.
9. Howard R, McShane R, Lindesay J, Ritchie C, Baldwin A, Barber R, Burns A, Dening T, Findlay D, Holmes C, Hughes A, Jacoby R, Jones R, Jones R, McKeith I, Macharouthu A, O'Brien J, Passmore P, Sheehan B, Juszczak E, Katona C, Hills R, Knapp M, Ballard C, Brown R, Banerjee S, Onions C, Griffin M, Adams J, Gray R, Johnson T, Bentham P, Phillips P. Donepezil and memantine for moderate-to-severe Alzheimer's disease. N Engl J Med. 2012;366:893-903. doi:10.1056/NEJMoa1106668.
10. Lopez OL, Becker JT, Wahed AS, Saxton J, Sweet RA, Wolk DA, Klunk W, Dekosky ST. Long-term effects of the concomitant use of memantine with cholinesterase inhibition in Alzheimer disease. J Neurol Neurosurg Psychiatry. 2009;80:600-7. doi:10.1136/jnnp.2008.158964.
11. Porsteinsson AP, Grossberg GT, Mintzer J, Olin JT; Memantine MEM-MD-12 Study Group. Memantine treatment in patients with mild to moderate Alzheimer's disease already receiving a cholinesterase inhibitor: a randomized, double-blind, placebo-controlled trial. Curr Alzheimer Res. 2008;5:83-9. doi:10.2174/156720508783884576.
12. Riepe MW, Adler G, Ibach B, Weinkauf B, Tracik F, Gunay I. Domain-specific improvement of cognition on memantine in patients with Alzheimer's disease treated with rivastigmine. Dement Geriatr Cogn Disord. 2007;23:301-6. doi:10.1159/000100875.
13. Dantoine T, Auriacombe S, Sarazin M, Becker H, Pere JJ, Bourdeix I. Rivastigmine monotherapy and combination therapy with memantine in patients with moderately severe Alzheimer's disease who failed to benefit from previous cholinesterase inhibitor treatment. Int J Clin Pract. 2006;60:110-8. doi:10.1111/j.1368-5031.2005.00769.x.
14. Tariot PN, Farlow MR, Grossberg GT, Graham SM, McDonald S, Gergel I; Memantine Study Group. Memantine treatment in patients with moderate to severe Alzheimer disease already receiving donepezil: a randomized controlled trial. JAMA. 2004;291:317-24. doi:10.1001/jama.291.3.317.
15. American Psychiatric Association. Diagnostic criteria from DSM-IV-TR. Washington, D.C.: American Psychiatric Association; 2000. p. 370.
16. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189-98. doi:10.1016/0022-3956(75)90026-6.
17. Galasko D, Bennett D, Sano M, Ernesto C, Thomas R, Grundman M, Ferris S. An inventory to assess activities of daily living for clinical trials in Alzheimer's disease. The Alzheimer's Disease Cooperative Study. Alzheimer Dis Assoc Disord. 1997;11(Suppl 2):S33-9.
18. Schliebs R, Arendt T. The significance of the cholinergic system in the brain during aging and in Alzheimer's disease. J Neural Transm. 2006;113:1625-44. doi:10.1007/s00702-006-0579-2.
19. Berger-Sweeney J. The cholinergic basal forebrain system during development and its influence on cognitive processes: important questions and potential answers. Neurosci Biobehav Rev. 2003;27:401-11. doi:10.1016/s0149-7634(03)00070-8.
20. Birks J. Cholinesterase inhibitors for Alzheimer's disease. Cochrane Database Syst Rev. 2006;(1):CD005593. doi:10.1002/14651858.CD005593.
21. Gilling KE, Jatzke C, Hechenberger M, Parsons CG. Potency, voltage-dependency, agonist concentration-dependency, blocking kinetics and partial untrapping of the uncompetitive N-methyl-D-aspartate (NMDA) channel blocker memantine at human NMDA (GluN1/GluN2A) receptors. Neuropharmacology. 2009;56:866-75. doi:10.1016/j.neuropharm.2009.01.012.
22. Bakchine S, Loft H. Memantine treatment in patients with mild to moderate Alzheimer's disease: results of a randomised, double-blind, placebo-controlled 6-month study. J Alzheimers Dis. 2008;13:97-107. doi:10.3233/jad-2008-13110.
23. Peskind ER, Potkin SG, Pomara N, Ott BR, Graham SM, Olin JT, McDonald S. Memantine treatment in mild to moderate Alzheimer disease: a 24-week randomized, controlled trial. Am J Geriatr Psychiatry. 2006;14:704-15. doi:10.1097/01.JGP.0000224350.82719.83.
24. Schneider LS, Dagerman KS, Higgins JP, McShane R. Lack of evidence for the efficacy of memantine in mild Alzheimer disease. Arch Neurol. 2011;68:991-8. doi:10.1001/archneurol.2011.69.
25. Ni R, Marutle A, Nordberg A. Modulation of alpha7 nicotinic acetylcholine receptor and fibrillar amyloid-beta interactions in Alzheimer's disease brain. J Alzheimers Dis. 2013;33:841-51. doi:10.3233/JAD-2012-121447.
26. Wang YE, Yue DX, Tang XC. [Anti-cholinesterase activity of huperzine A]. Zhongguo Yao Li Xue Bao. 1986;7:110-3.
188880
https://www.youtube.com/watch?v=8Kq8htzbBpc
The Theoretical Radiation Pattern of a Half-wave Dipole — Lesson 3

EMViso · Posted: 10 Aug 2021

Description: This lesson introduces the far-field radiation pattern of a half-wave dipole antenna, which, in theory, has perfect rotational symmetry with respect to 𝜙. It has nulls at θ=0° and at θ=180°, and its maximum is in the plane θ=90°. This course was created for Ansys Innovation Courses by Dr. Kathryn Leigh Smith, assistant professor, University of North Carolina-Charlotte, in partnership with Ansys.

Transcript: The far-field radiation pattern of a half-wave dipole, in theory, has perfect rotational symmetry with respect to phi, so rotationally around the z axis. It has nulls at theta = 0 degrees and at theta = 180 degrees, and its maximum is in the plane where theta = 90 degrees. This is a graph of antenna directivity on the plane phi = 0 and phi = 180, so this is as though we were looking at the antenna from somewhere on the xy plane. You can see that radiation is broadside from the antenna, outward in the xy plane, and no radiation is going upward or downward. This is another plot of the same radiation pattern, but this time viewed from the positive z axis on the theta = 90 plane. Here you see the rotational symmetry of the radiation outward on the xy plane. We can also look at the radiation pattern in terms of a 3D plot. This is the radiation pattern viewed from the positive z axis: regions of high directivity are shaded red, and as directivity decreases the color coding changes to orange, to yellow, to green, to blue. Here's the same plot viewed from the positive x axis, and here is an oblique view from somewhere in the first octant. 3D plots are pretty difficult to interpret in static images, but if we tilt it around a bit you can hopefully see what's going on. Again, you can see that the maximum directivity is broadside from the line of the antenna and rotationally symmetric about the wire, while the minimum directivity is seen in the upward and downward directions. At the design frequency, where the length L of the antenna is equal to half a wavelength, the maximum directivity of the dipole antenna is approximately 1.64 in linear scale, or 2.16 dB. Remember that this can easily be increased by adding reflector and director elements, as we saw with the Yagi-Uda antenna. Also at this frequency, the input impedance of the antenna will be approximately equal to 73 + j42.5 ohms. Notice that this is inductive; that's because of the fringing fields of the antenna, as we discussed earlier. Here's a graph of the input impedance to show what's going on. Here you have the real part and the imaginary part of the input impedance plotted on the same axes, and for the
purposes of this example I used a design frequency of 5 GHz, so this antenna is exactly half a wavelength long at 5 GHz. Here's the design frequency of the antenna on the graph, and you can see that the input impedance here has a real part of approximately 73 ohms and an imaginary part of approximately 42.5 ohms, which is inductive. We can also observe that the resonance of this antenna, which is defined by a zero-ohm reactance, is actually happening at a slightly lower frequency, and this is where the antenna will actually want to operate. It's also worth noting that at this frequency the real part of the input impedance is lower than 73 ohms and will more closely match a 50-ohm feed line. Again, in order to shift the operating frequency up to match your target design frequency, all you would need to do is slightly trim the ends of the antenna to compensate for the effective length added by the fringing fields.
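The directivity figure quoted in the lesson is easy to verify from the standard half-wave dipole pattern F(θ) = cos((π/2)·cosθ)/sinθ, a textbook result that the video does not state explicitly. A short numerical integration (my addition) recovers the ≈1.64 (≈2.2 dB) maximum directivity:

```python
import numpy as np

# Normalized E-field pattern of an ideal half-wave dipole
theta = np.linspace(1e-4, np.pi - 1e-4, 2001)   # avoid the nulls at 0 and pi
F = np.cos(0.5*np.pi*np.cos(theta)) / np.sin(theta)

# Directivity D = 4*pi*U_max / P_rad, with radiation intensity U = |F|^2 and
# P_rad = integral of U*sin(theta) over the sphere (phi gives 2*pi by symmetry)
U = F**2
dtheta = theta[1] - theta[0]
P_rad = 2*np.pi*np.sum(U*np.sin(theta))*dtheta
D = 4*np.pi*U.max() / P_rad

print(D, 10*np.log10(D))   # ≈ 1.64 linear, ≈ 2.15 dB, matching the lesson
```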
188881
https://pubchem.ncbi.nlm.nih.gov/element/Boron
Boron | B (Element) - PubChem

Summary

Boron is a chemical element with symbol B and atomic number 5. Classified as a metalloid, boron is a solid at 25°C (room temperature).

| Atomic Mass | 10.81 u |
| Electron Configuration | [He] 2s² 2p¹ |
| Oxidation States | +3 |
| Year Discovered | 1808 |

1 Identifiers

1.1 Element Name: Boron (PubChem; IUPAC Commission on Isotopic Abundances and Atomic Weights (CIAAW))

1.2 Element Symbol: B (PubChem; CIAAW)

1.3 InChI: InChI=1S/B (PubChem)

1.4 InChIKey: ZOXJGFHDIHLPTG-UHFFFAOYSA-N (PubChem)

2 Properties

2.1 Atomic Weight: [10.806, 10.821] (CIAAW); 10.811 (Jefferson Lab, U.S. Department of Energy); 10.81 (Los Alamos National Laboratory, U.S. Department of Energy); [10.806, 10.821] (NIST Physical Measurement Laboratory)

2.2 Electron Configuration: [He] 2s² 2p¹ (Los Alamos National Laboratory, U.S. Department of Energy)

2.3 Atomic Radius: 192 pm (Van der Waals; Los Alamos National Laboratory, U.S. Department of Energy); 85 pm (Empirical; J.C. Slater, J Chem Phys, 1964, 41(10), 3199-3205, doi:10.1063/1.1725697); 84(3) pm (Covalent; B. Cordero, V. Gómez, A.E. Platero-Prats, M. Revés, J. Echeverría, E. Cremades, F. Barragán, S. Alvarez, Dalton Trans., 2008, 21, 2832-2838, doi:10.1039/b801115j, PMID 18478144)

2.4 Oxidation States: +3 (Jefferson Lab, U.S. Department of Energy); 3, 2, 1, -1, -5 (a mildly acidic oxide) (Los Alamos National Laboratory, U.S. Department of Energy)

2.5 Ground Level: ²P°₁/₂ (NIST Physical Measurement Laboratory)

2.6 Ionization Energy: 8.298 eV (Jefferson Lab, U.S. Department of Energy); 8.298019 ± 0.000003 eV (A.E. Kramida and A.N. Ryabtsev, "A Critical Compilation of Energy Levels and Spectral Lines of Neutral Boron", Phys. Scr. 76, 544-557 (2007), doi:10.1088/0031-8949/76/5/024; NIST Physical Measurement Laboratory)

2.7 Electronegativity: 2.04 (Pauling scale; A.L. Allred, J. Inorg. Nucl. Chem., 1961, 17(3-4), 215-221, doi:10.1016/0022-1902(61)80142-5); 2.051 (Allen scale; L.C. Allen, J. Am. Chem. Soc., 1989, 111, 9003, doi:10.1021/ja00207a003; J.B. Mann, T.L. Meek and L.C. Allen, J. Am. Chem. Soc., 2000, 122, 2780, doi:10.1021/ja992866e; J.B. Mann, T.L. Meek, E.T. Knight, J.F. Capitani and L.C. Allen, J. Am.
Chem. Soc., 2000, 122, 5132, DOI:10.1021/ja9928677; PubChem Elements)
2.8 Electron Affinity: 0.277 eV (R.T. Myers, J. Chem. Educ., 1990, 67(4), 307, DOI:10.1021/ed067p307; PubChem Elements); 0.18 eV (R.J. Zollweg, J. Chem. Phys., 1969, 50, 4251, DOI:10.1063/1.1670890; PubChem Elements)
2.9 Atomic Spectra: line and level holdings (NIST Physical Measurement Laboratory)
2.10 Physical Description: Solid (Jefferson Lab, U.S. Department of Energy)
2.11 Element Classification: Semi-metal (Jefferson Lab, U.S. Department of Energy)
2.12 Element Period Number: 2 (Jefferson Lab, U.S. Department of Energy)
2.13 Element Group Number: 13 (Jefferson Lab, U.S. Department of Energy)
2.14 Density: 2.37 grams per cubic centimeter (Jefferson Lab, U.S. Department of Energy)
2.15 Melting Point: 2348 K (2075 °C or 3767 °F) (Jefferson Lab, U.S. Department of Energy); 2076 °C (Los Alamos National Laboratory, U.S. Department of Energy)
2.16 Boiling Point: 4273 K (4000 °C or 7232 °F) (Jefferson Lab, U.S. Department of Energy); 3927 °C (Los Alamos National Laboratory, U.S. Department of Energy)
2.17 Estimated Crustal Abundance: 1.0×10¹ milligrams per kilogram (Jefferson Lab, U.S. Department of Energy)
2.18 Estimated Oceanic Abundance: 4.44 milligrams per liter (Jefferson Lab, U.S. Department of Energy)

3 History
The name derives from the Arabic buraq for "white". Although its compounds were known for thousands of years, the element was not isolated until 1808 by the French chemists Louis-Joseph Gay-Lussac and Louis-Jacques Thénard. (CIAAW)
Boron was discovered by Joseph-Louis Gay-Lussac and Louis-Jacques Thénard, French chemists, and independently by Sir Humphry Davy, an English chemist, in 1808. They all isolated boron by combining boric acid (H3BO3) with potassium. Today, boron is obtained by heating borax (Na2B4O7·10H2O) with carbon, although other methods are used if high-purity boron is required. (Jefferson Lab, U.S. Department of Energy)
From the Arabic word buraq and the Persian burah. Boron compounds have been known for thousands of years, but the element was not discovered until 1808 by Sir Humphry Davy and by Gay-Lussac and Thénard. (Los Alamos National Laboratory, U.S.
Department of Energy)

3.1 Historical Atomic Weights (CIAAW)
2009: [10.806, 10.821] (doi:10.1351/PAC-REP-10-09-14)
1995: 10.811(7) (doi:10.1351/pac199668122339)
1983: 10.811(5) (doi:10.1351/pac198456060653)
1969: 10.81(1) (doi:10.1351/pac197021010091)
1961: 10.811(3) (doi:10.1021/ja00881a001)
1925: 10.82 (doi:10.1039/CT9252700913)
1920: 10.9 (doi:10.1021/ja02233a600)
1902: 11 (doi:10.1007/BF01370337)

3.2 Historical Isotopic Abundances (CIAAW)
2013: 10B [0.189, 0.204], 11B [0.796, 0.811] (doi:10.1515/pac-2015-0503)
1997: 10B 0.199(7), 11B 0.801(7) (doi:10.1351/pac199870010217)
1981: 10B 0.199(2), 11B 0.801(2) (doi:10.1351/pac198355071119)
1975: 10B 0.2, 11B 0.8 (doi:10.1351/pac197647010075)
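As a quick consistency check on the figures above (an added illustration, not part of the PubChem record), the conventional atomic weight is the abundance-weighted mean of the masses of the two stable isotopes. Using the 1997 representative abundances together with the isotopic masses listed in section 8.4 below,

$$0.199 \times 10.0129\,\mathrm{u} + 0.801 \times 11.0093\,\mathrm{u} \approx 10.811\,\mathrm{u},$$

which reproduces the Jefferson Lab value of 10.811 and falls inside the CIAAW interval [10.806, 10.821].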
4 Uses
Boron is used in pyrotechnics and flares to produce a green color. Boron has also been used in some rockets as an ignition source. Boron-10, one of the naturally occurring isotopes of boron, is a good absorber of neutrons and is used in the control rods of nuclear reactors, as a radiation shield, and as a neutron detector. Boron filaments are used in the aerospace industry because of their high strength and light weight. Boron forms several commercially important compounds. The most important boron compound is sodium borate pentahydrate (Na2B4O7·5H2O); large amounts of this compound are used in the manufacture of fiberglass insulation and sodium perborate bleach. The second most important compound is boric acid (H3BO3), which is used to manufacture textile fiberglass and is used in cellulose insulation as a flame retardant. Sodium borate decahydrate (Na2B4O7·10H2O), better known as borax, is the third most important boron compound. Borax is used in laundry products and as a mild antiseptic. Borax is also a key ingredient in a substance known as Oobleck, a strange material 6th grade students experiment with while participating in Jefferson Lab's BEAMS program. Other boron compounds are used to make borosilicate glasses, enamels for covering steel, and as a potential medicine for treating arthritis. (Jefferson Lab, U.S. Department of Energy)
Amorphous boron is used in pyrotechnic flares to provide a distinctive green color, and in rockets as an igniter. By far the most commercially important boron compound in terms of dollar sales is Na2B4O7·5H2O. This pentahydrate is used in very large quantities in the manufacture of insulation fiberglass and sodium perborate bleach. Boric acid is also an important boron compound with major markets in textile products. Use of borax as a mild antiseptic is minor in economic terms. Boron compounds are also extensively used in the manufacture of borosilicate glasses. Other boron compounds show promise in treating arthritis. The isotope boron-10 is used as a control for nuclear reactors, as a shield for nuclear radiation, and in instruments used for detecting neutrons. Boron nitride has remarkable properties and can be used to make a material as hard as diamond; the nitride also behaves like an electrical insulator but conducts heat like a metal. Boron also has lubricating properties similar to graphite. The hydrides are easily oxidized with considerable energy liberation and have been studied for use as rocket fuels. Demand is increasing for boron filaments, a high-strength, lightweight material chiefly employed for advanced aerospace structures. Boron is similar to carbon in that it has a capacity to form stable covalently bonded molecular networks: the carboranes, metalloboranes, phosphacarboranes, and other families comprise thousands of compounds. (Los Alamos National Laboratory, U.S. Department of Energy)

5 Sources
The element is not found free in nature, but occurs as orthoboric acid, usually found in certain volcanic spring waters, and as borates in borax and colemanite. Important sources of boron are the ores rasorite (kernite) and tincal (borax ore). Both of these ores are found in the Mojave Desert, tincal being the most important source of boron from the Mojave. Extensive borax deposits are also found in Turkey. Boron exists naturally as 19.78% boron-10 and 80.22% boron-11. High-purity crystalline boron may be prepared by the vapor-phase reduction of boron trichloride or tribromide with hydrogen on electrically heated filaments. Impure, or amorphous, boron, a brownish-black powder, can be obtained by heating the trioxide with magnesium powder. Boron of 99.9999% purity has been produced and is available commercially. Elemental boron has an energy band gap of 1.50 to 1.56 eV, which is higher than that of either silicon or germanium. (Los Alamos National Laboratory, U.S. Department of Energy)
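The two preparation routes just described correspond to the following balanced equations (standard chemistry written out here for clarity; the source gives only the verbal description):

$$2\,\mathrm{BCl_3} + 3\,\mathrm{H_2} \rightarrow 2\,\mathrm{B} + 6\,\mathrm{HCl} \qquad\text{and}\qquad \mathrm{B_2O_3} + 3\,\mathrm{Mg} \rightarrow 2\,\mathrm{B} + 3\,\mathrm{MgO}.$$

The hydrogen reduction of the trihalide gives the high-purity crystalline product, while the magnesium reduction of the trioxide gives the impure amorphous powder.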
6 Compounds
See more information at the Boron compound page. (PubChem Elements)
6.1 Element Forms (PubChem Elements)
CID 5462311: boron, formula B, SMILES [B], molecular weight 10.81
CID 6337058: boron-10, formula B, SMILES [10B], molecular weight 10.0129369
CID 10125044: boron-11, formula B, SMILES [11B], molecular weight 11.0093052
CID 6328187: boron(1-), formula B-, SMILES [B-], molecular weight 10.81
CID 58665376: boron-17, formula B, SMILES [17B], molecular weight 17.047
CID 58665377: boron-12, formula B, SMILES [12B], molecular weight 12.01435
CID 11205712: boron-10(1-), formula B-, SMILES [10B-], molecular weight 10.0129369
CID 11701025: boron-13, formula B, SMILES [13B], molecular weight 13.01778
CID 16218343: boron-11(1-), formula B-, SMILES [11B-], molecular weight 11.0093052

7 Handling and Storage
Elemental boron and the borates are not considered to be toxic, and they do not require special care in handling. However, some of the more exotic boron-hydrogen compounds are definitely toxic and do require care. (Los Alamos National Laboratory, U.S. Department of Energy)

8 Isotopes
Stable Isotope Count: 2 (Jefferson Lab, U.S. Department of Energy)

8.1 Isotopes in Earth / Planetary Science
Molecules, atoms, and ions of the stable isotopes of boron possess slightly different physical and chemical properties, and they commonly will be fractionated during physical, chemical, and biological processes, giving rise to variations in isotopic abundances and in atomic weights. Natural terrestrial materials show a substantial variation in boron isotopic abundance (Fig. IUPAC.5.1). The relative abundances of 10B and 11B have been used in a variety of environmental tracer applications. The isotope-amount ratio n(11B)/n(10B) of boron in a water sample depends on the source of the water and the region through which the water flows, and it may also be affected by some types of contamination, such as dissolved borate in domestic wastewater. Different water sources may have their own distinct boron isotopic composition, e.g. seawater versus water from continental sources (Fig. IUPAC.5.1).
Fig. IUPAC.5.1: Variations in atomic weight with isotopic composition of selected boron-bearing materials.
References: M. W. Wieser, T. B. Coplen, Pure Appl. Chem. 83, 359 (2011); A. Vengosh, K. G. Heumann, S. Jaraske, R. Kasher, Environ. Sci. Technol. 28, 1968 (1994); A. Vengosh, Biol. Trace Elem. Res. 66, 145 (1998).

8.2 Isotopes in Industry
The large value of the absorption cross section of 10B for thermal neutrons makes this isotope useful for counting neutrons, and 10B is being studied as a potential replacement for 3He in radiation detectors. The large thermal absorption cross section of 10B also makes the isotope useful in control rods (Fig. IUPAC.5.2).
Fig. IUPAC.5.2: Diagram of a typical pressurized water reactor, showing where the boron control rods can be inserted or withdrawn from the core. (Diagram source: U.S. Nuclear Regulatory Commission)
References: G. V. Jean, Advancing Hidden Nuclear Material Detection, National Defense Industrial Association (2014); L. Foulke, Introduction to Reactivity and Reactor Control, IAEA Workshop on Desktop Simulation (2014); P. Frame, Boron Trifluoride (BF3) Neutron Detectors, Oak Ridge Associated Universities (2014); United States Nuclear Regulatory Commission, Pressurized Water Reactors (2014).

8.3 Isotopes in Medicine
10B has a high thermal neutron absorption cross section and can readily absorb neutrons via the reaction $^{10}\mathrm{B} + n \rightarrow {}^{7}\mathrm{Li} + \alpha$. The alpha particles resulting from this reaction carry away a relatively large kinetic energy and are useful for the treatment of malignant tumors in cancer patients.
References: D. Gabel, Radiother. Oncol. 30, 199 (1994); D. N. Slatkin, Neutron News 1, 25 (1990); R. F. Barth, J. A. Coderre, M. C. G. Vicente, T. E. Blue, Clin. Cancer Res. 11, 3987 (2005).
(IUPAC Periodic Table of the Elements and Isotopes (IPTEI))
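A worked illustration of the neutron-capture reaction above (added here; only the 10B mass comes from this record's section 8.5, while the neutron, 7Li, and 4He masses are standard values not given in the record): the ground-state Q-value is

$$Q = \big[m(^{10}\mathrm{B}) + m(n) - m(^{7}\mathrm{Li}) - m(^{4}\mathrm{He})\big]c^{2} \approx (10.01294 + 1.00866 - 7.01600 - 4.00260)\,\mathrm{u} \times 931.494\,\mathrm{MeV/u} \approx 2.79\ \mathrm{MeV},$$

which is the "relatively large kinetic energy" shared by the alpha particle and the lithium recoil.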
8.4 Isotope Mass and Abundance
IUPAC CIAAW values:
10B: atomic mass 10.0129369(1) u; abundance [0.189, 0.204]
11B: atomic mass 11.00930517(8) u; abundance [0.796, 0.811]
NIST Physical Measurement Laboratory values:
10B: atomic mass 10.01293695(41) u; abundance 0.199(7)
11B: atomic mass 11.00930536(45) u; abundance 0.801(7)

8.5 Atomic Mass, Half Life, and Decay (Atomic Mass Data Center (AMDC), International Atomic Energy Agency (IAEA))
6B: mass 6.050800 ± 0.00215 u [estimated]; p-unstable; decay: 2p ?
7B: mass 7.029712000 ± 0.000027 u; half-life 570 ys ± 14; discovered 1967; decay: p = 100%
8B: mass 8.024607315 ± 0.000001073 u; half-life 771.9 ms ± 0.9; discovered 1950; decay: β+ = 100%, β+α = 100%
9B: mass 9.013329645 ± 0.000000969 u; half-life 800 zs ± 300; discovered 1940; decay: p = 100%
10B: mass 10.012936862 ± 0.000000016 u; stable; discovered 1920; IS = 19.65 ± 4.4%
11B: mass 11.009305166 ± 0.000000013 u; stable; discovered 1920; IS = 80.35 ± 4.4%
12B: mass 12.014352638 ± 0.000001418 u; half-life 20.20 ms ± 0.02; discovered 1935; decay: β- = 100%, β-α = 0.60 ± 0.2%
13B: mass 13.017779981 ± 0.000001073 u; half-life 17.16 ms ± 0.18; discovered 1956; decay: β- = 100%, β-n = 0.266 ± 3.6%
14B: mass 14.025404010 ± 0.000022773 u; half-life 12.36 ms ± 0.29; discovered 1966; decay: β- = 100%, β-n = 6.04 ± 2.3%, β-2n ?
15B: mass 15.031087023 ± 0.000022575 u; half-life 10.18 ms ± 0.35; discovered 1966; decay: β- = 100%, β-n = 98.7 ± 1%, β-2n < 1.5%
16B: mass 16.039841045 ± 0.000026373 u; half-life 4.6 zs; discovered 2000; decay: n ?
17B: mass 17.046931399 ± 0.000219114 u; half-life 5.08 ms ± 0.05; discovered 1973; decay: β- = 100%, β-n = 63 ± 0.1%, β-2n = 12 ± 0.2%, β-3n = 3.5 ± 0.7%, β-4n = 0.4 ± 0.3%
18B: mass 18.055601683 ± 0.00021918 u; half-life < 26 ns; discovered 2010; decay: n = 100%
19B: mass 19.064166000 ± 0.000564 u; half-life 2.92 ms ± 0.13; discovered 1984; decay: β- = 100%, β-n = 71 ± 0.9%, β-2n = 17 ± 0.5%, β-3n < 9.1%
20B: mass 20.074505644 ± 0.000586538 u; half-life 912.4 ys; discovered 2018; decay: n = 100%, β-n ?, β-2n ?
21B: mass 21.084147485 ± 0.00059975 u; half-life 760 ys; discovered 2018; decay: 2n = 100%

9 Information Sources
PubChem; Atomic Mass Data Center (AMDC), International Atomic Energy Agency (IAEA); IUPAC Commission on Isotopic Abundances and Atomic Weights (CIAAW); Jefferson Lab, U.S. Department of Energy; Los Alamos National Laboratory, U.S. Department of Energy; NIST Physical Measurement Laboratory; IUPAC Periodic Table of the Elements and Isotopes (IPTEI), provided under a CC BY-NC-ND 4.0 license unless otherwise stated; PubChem Elements.
188882
https://www.zhihu.com/question/561793456?write
What is the difference between displacement and distance traveled? - Zhihu

Topics: Mechanics; High School Physics; Classical Mechanics; Grade 10 Physics; Engineering Mechanics

Question: What is the difference between displacement (位移) and distance traveled (路程)?

8 followers; 3,354 views; 9 answers.
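Since the snapshot records only the question, a standard worked example is added here to illustrate the distinction (it is not taken from any of the page's answers): for a particle that travels half-way around a circle of radius $r$, the distance traveled is the arc length while the displacement is the straight-line vector from start to end,

$$s = \pi r \qquad\text{but}\qquad |\Delta\vec{r}| = 2r,$$

so in general $s \geq |\Delta\vec{r}|$, with equality only for motion along a straight line without reversal.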
188883
https://www.scribd.com/document/557890799/Chapter-Iso-Quant-Curves
Chapter: Iso-Quant Curves (Scribd document, 9 pages, uploaded by Sajid Alvi)

ISOQUANT

Meaning
An isoquant is a firm's counterpart of the consumer's indifference curve. An isoquant is a curve that shows all the combinations of inputs that yield the same level of output. 'Iso' means equal and 'quant' means quantity; therefore, an isoquant represents a constant quantity of output. The isoquant curve is also known as an "Equal Product Curve", "Production Indifference Curve", or "Iso-Product Curve".

The concept of isoquants can be easily explained with the help of the table given below:

Table 1: An Isoquant Schedule
Combination | Units of Labor (L) | Units of Capital (K) | Output of Cloth (meters)
A | 5 | 9 | 100
B | 10 | 6 | 100
C | 15 | 4 | 100
D | 20 | 3 | 100

The above table is based on the assumption that only two factors of production, namely labor and capital, are used for producing 100 meters of cloth.
Combination A = 5L + 9K = 100 meters of cloth
Combination B = 10L + 6K = 100 meters of cloth
Combination C = 15L + 4K = 100 meters of cloth
Combination D = 20L + 3K = 100 meters of cloth
The combinations A, B, C and D show the possibility of producing 100 meters of cloth by applying various combinations of labor and capital. Thus, an isoquant schedule is a schedule of different combinations of factors of production yielding the same quantity of output. An iso-product curve is the graphic representation of an iso-product schedule. Thus, an isoquant is a curve showing all combinations of labor and capital that can be used to produce a given quantity of output.

Isoquant Map
An isoquant map is a set of isoquants that shows the maximum attainable output from any given combination of inputs.

Isoquants vs. Indifference Curves
An isoquant is analogous to an indifference curve in more than one way, and the properties of isoquants are similar to the properties of indifference curves. However, some differences may also be noted. Firstly, in the indifference curve technique, utility cannot be measured; in the case of an isoquant, the product can be precisely measured in physical units. Secondly, in the case of indifference curves, we can talk only about higher or lower levels of utility; in the case of isoquants, we can say by how much IQ2 actually exceeds IQ1 (figure 2).

Properties of Isoquants

A higher isoquant represents a higher level of output
An isoquant lying above and to the right of another isoquant represents a higher level of output.
This is because, on the higher isoquant, we have either more units of one factor of production or more units of both factors. This is illustrated in figure 3, where points A and B lie on the isoquants IQ1 and IQ2 respectively. At point A we have OX1 units of labor and OY1 units of capital; at point B we have OX2 units of labor and OY1 units of capital. Though the amount of capital (OY1) is the same at both points, point B has X1X2 more units of labor and will therefore yield a higher output. Hence, a higher isoquant shows a higher level of output.

Two isoquants cannot cut each other
Just as two indifference curves cannot cut each other, two isoquants also cannot cut each other. If they intersected, there would be a contradiction and we would get inconsistent results. This can be illustrated with the help of a diagram, as in figure 4. In figure 4, the isoquant IQ1 shows 100 units of output produced by various combinations of labor and capital, and the curve IQ2 shows 200 units of output. On IQ1, we have A = C, because they are on the same isoquant. On IQ2, we have A = B. Therefore B = C. This is, however, inconsistent, since C = 100 and B = 200. Therefore, isoquants cannot intersect.

Isoquants are convex to the origin
An isoquant must always be convex to the origin. This is because of the operation of the principle of diminishing marginal rate of technical substitution (MRTS). MRTS is the rate at which a marginal unit of an input can be substituted for another input while the level of output remains the same. In figure 5, as the producer moves from point A to B, from B to C, and from C to D along an isoquant, the MRTS of labor for capital diminishes. The MRTS diminishes because the two factors are not perfect substitutes: for every increase in labor units (ΔL) there is a corresponding decrease in the units of capital (ΔK). An isoquant cannot be concave, as shown in figure 6; if isoquants were concave, the MRTS of labor for capital would increase, which is not true of isoquants. (A numerical check using the schedule in Table 1 appears below.)
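A minimal Python sketch (an added illustration, not part of the Scribd chapter) that computes the MRTS of labor for capital between successive combinations of Table 1; the diminishing values 0.6, 0.4, 0.2 are exactly the convexity behavior described above:

# Isoquant schedule from Table 1: (combination, units of labor L, units of capital K),
# every row yielding the same 100 meters of cloth.
schedule = [("A", 5, 9), ("B", 10, 6), ("C", 15, 4), ("D", 20, 3)]

# MRTS of labor for capital between consecutive points is |delta K| / |delta L|:
# the capital given up per extra unit of labor, output held constant.
for (name1, l1, k1), (name2, l2, k2) in zip(schedule, schedule[1:]):
    mrts = abs(k2 - k1) / abs(l2 - l1)
    print(f"MRTS from {name1} to {name2}: {mrts:.1f}")

# Prints 0.6, then 0.4, then 0.2: the MRTS diminishes as labor is substituted
# for capital, so the isoquant through these points is convex to the origin.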
188884
https://www.ncbi.nlm.nih.gov/books/NBK572133/
Atelectasis (Nursing) - StatPearls - NCBI Bookshelf

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health. StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.

Atelectasis (Nursing)
Kelly Grott (Indiana University); Shaylika Chauhan (Geisinger Health System); Devang K. Sanghavi (Mayo Clinic); Julie D. Dunlap; Shelley Lee (Tennessee Valley Hospital). Last Update: February 26, 2024.

Learning Outcome
- Define atelectasis
- Identify risk factors for atelectasis
- Compare and contrast obstructive versus non-obstructive atelectasis
- Describe expected assessment and diagnostic findings in a patient experiencing atelectasis
- Select appropriate nursing interventions to support a patient experiencing atelectasis
- Describe medical interventions that may be used for atelectasis
- Identify key patient education interventions

Introduction
The word "atelectasis" is Greek in origin; it is a combination of the Greek words ateles ("imperfect") and ektasis ("expansion"). Atelectasis results from the partial or complete, reversible collapse of the small airways, leading to an impaired exchange of CO2 and O2. The incidence of atelectasis in patients undergoing general anesthesia is as high as 90%.

Nursing Diagnosis
Impaired gas exchange is an appropriate NANDA nursing diagnosis for atelectasis.

Causes
Mechanistically, atelectasis is of three types: compressive, due to lung tissue compression; resorptive, caused by absorption of alveolar air; or related to an impairment of pulmonary surfactant production or function. It is categorized as obstructive, nonobstructive, postoperative, or rounded atelectasis. There are five types of nonobstructive atelectasis: compression, adhesive, cicatrization, relaxation, and replacement atelectasis.

Compression atelectasis happens when there is increased pressure exerted on the lung, which causes a transmural pressure difference between the extra- and intra-alveolar space that results in alveolar collapse. During anesthesia, diaphragmatic relaxation occurs, inhibiting the natural lowering of the diaphragm that occurs during spontaneous breathing. Lying in the supine position further displaces the diaphragm toward the head, resulting in additional inhibition of gas exchange due to further impairment of the transmural pressure gradient and an increased risk of atelectasis.

Adhesive atelectasis occurs due to either a surfactant deficiency or dysfunction. This type is commonly seen in patients with adult respiratory distress syndrome (ARDS) or among premature infants with respiratory distress syndrome (RDS). Surfactant prevents alveolar collapse by decreasing alveolar surface tension.
Alterations to surfactant production and function tend to increase the surface tension within the alveoli, creating instability that results in collapse. Cicatrization atelectasis results from contraction of the lung tissue due to the development of parenchymal scar tissue; it is often seen in tuberculosis, fibrosis, and other chronic destructive lung diseases. Relaxation atelectasis occurs when there is a loss of contact between the parietal and visceral pleura, as happens with pneumothoraces and pleural effusions. Replacement atelectasis occurs when a tumor replaces the alveoli of an entire lobe of the lung, as seen with bronchioalveolar carcinoma, resulting in complete collapse.

In obstructive atelectasis, the air in the alveoli is absorbed distal to the point of obstruction; this is why it is also referred to as resorptive atelectasis. In this instance, there is a mismatch between ventilation, interrupted by complete or partial obstruction of the alveoli, and the uninterrupted perfusion of the alveoli. This ongoing ventilation-perfusion mismatch results in the absorption of the gas in the alveoli, precipitating the collapse. Obstructive atelectasis may be caused by intrathoracic tumors, mucous plugs, and foreign bodies. Children are at risk for resorption atelectasis due to aspiration of foreign bodies because their collateral pathways for ventilation are less developed. Adults with COPD are less likely to develop resorptive atelectasis related to an obstructive lesion, due to better-developed collateral ventilation secondary to airway destruction.

General anesthesia contributes to the development of absorptive atelectasis through the use of high inspiratory concentrations of oxygen (FiO2). The risk for atelectasis due to the use of high FiO2 during anesthesia is related to the rates at which oxygen and nitrogen are absorbed into the bloodstream: oxygen is rapidly absorbed, whereas nitrogen is not, and increasing the amount of oxygen inspired decreases the amount of nitrogen available to hold the alveoli open.

Postoperative atelectasis is typically seen within 72 hours of surgery using general anesthesia and is a known complication of general anesthesia.

Rounded atelectasis is rare and is most often associated with asbestosis. It occurs because the lung tissue folds against the pleura.

Any of these mechanisms may contribute to perioperative atelectasis; the absorptive and compression varieties are seen most frequently.

Middle lobe syndrome is the result of either an extraluminal or intraluminal obstruction of the bronchi, which results in either recurrent or fixed atelectasis of the lingula and right middle lobe of the lung. Inflammation, alterations in bronchial anatomy, and the presence of collateral ventilation are nonobstructive causes. Middle lobe syndrome is treated via bronchoscopy and bronchoalveolar lavage. Bronchiectasis may result from chronic atelectasis. Middle lobe syndrome has been associated with Sjögren syndrome, and treatment with steroids has shown promise.

Risk Factors
The incidence of atelectasis does not demonstrate gender differences; COPD, asthma, and increased age do not impact incidence either. Atelectasis is more common in those who have recently had surgery using general anesthesia, with incidence reported as high as 90% in this group.
Research has demonstrated that the dependent portions of the lungs show indications of atelectasis within five minutes of beginning general anesthesia. Atelectasis is more common following cardiac surgery with cardiopulmonary bypass than any other surgery, including thoracotomy; more generally, the risk of atelectasis is higher in patients undergoing abdominal or thoracic surgery. Obesity and pregnancy increase the risk for atelectasis due to upward displacement of the diaphragm (see the section on epidemiology).

Assessment
Typically, atelectasis is asymptomatic. However, a patient might also present with decreased or absent breath sounds, crackles, cough, sputum production, dyspnea, tachypnea, and/or diminished chest expansion.

Evaluation
Atelectasis is usually clinically diagnosed in a patient with known risk factors. If imaging is warranted, a chest film, chest computed tomography (CT), and/or thoracic ultrasound may be useful to diagnose atelectasis. A chest X-ray will show plate-like, horizontal lines in the area of atelectasis, although atelectasis is not usually seen on conventional chest X-rays until it is significant. On chest X-ray, displacement of interlobar fissures, pulmonary opacification, and/or tracheal shift toward the affected side is seen with atelectasis. Chest CT shows densities in the dependent lung and decreased volume on the affected side. Fiberoptic bronchoscopy may allow direct visualization of atelectasis; it can be both diagnostic and therapeutic, as it may reveal the cause of an obstruction causing the atelectasis (i.e., tumor, mucous plug, or foreign body). Arterial blood gas may demonstrate arterial hypoxemia and respiratory alkalosis with a normal-to-low PaCO2, as PaCO2 may be lowered by the increased minute ventilation often seen in atelectasis.

Medical Management
The transient lung dysfunction leading to atelectasis caused by general anesthesia generally resolves within 24 hours. Unfortunately, without treatment some patients may develop significant respiratory complications resulting in increased morbidity and mortality. Prevention of atelectasis is achieved through the avoidance of general anesthesia, early mobility, and adequate pain treatment, including minimization of parenteral opioid use. When general anesthesia is required, steps should be taken to prevent the development of atelectasis, such as: using continuous positive airway pressure (CPAP), using the lowest possible FiO2 during anesthesia administration, using positive end-expiratory pressure (PEEP), engaging in lung recruitment maneuvers, and using low tidal volumes. One study demonstrated that intraoperative alveolar recruitment followed by PEEP effectively prevented lung atelectasis in obese patients, resulting in better oxygenation, shorter recovery room time, and fewer pulmonary complications postoperatively. Sitting upright increases functional residual capacity (FRC), decreasing atelectasis. Other interventions that have been used to decrease atelectasis include deep breathing; early ambulation; proper use of an incentive spirometer or acapella device; chest physiotherapy; tracheal suctioning if intubated; and use of positive pressure ventilation. Each of these interventions temporarily increases the transmural pressure gradient, resulting in the re-expansion of collapsed areas of the lung. Preoperatively, patients should be taught atelectasis prevention measures, such as incentive spirometry.
In the case of incentive spirometry, it is believed that this intervention should begin preoperatively and then continue postoperatively, with hourly use encouraged to achieve maximal benefit. Mucolytic agents (acetylcysteine) and recombinant human DNase (dornase alfa) are pharmacologic agents that may be of some benefit in patients with cystic fibrosis, as mucous plugs are often seen in this patient population. Bronchoscopy may be used to manage atelectasis; in one study, bronchoscopy resulted in improved lung function, reversing atelectasis 76% of the time. Bronchoscopy should be used when mechanical obstruction of the bronchus is suspected and coughing or suctioning has not been successful. Bronchoscopy should also be considered when early ambulation, incentive spirometry, bronchodilators, and humidity have not been successful after 24 hours of use. Engaging in preventative strategies early and promptly recognizing atelectasis will improve patient outcomes and significantly decrease cost.

Nursing Management
Patients enter the hospital environment with a variety of risk factors that place them at risk for the development of atelectasis. Among these risk factors are a history of cardiovascular disease, pulmonary disease (particularly chronic obstructive pulmonary disease), neuromuscular disease, kidney failure, cancer, and autoimmune disorders. In addition, tobacco use, obesity, traumatic injury, advanced age, and recent respiratory illness are also risks. Careful assessment of the patient's respiratory status is required throughout the hospitalization. Particular attention should be given to lung sounds for diminishment and/or crackles. Complaints of dyspnea should be reported. The presence of cough should be further assessed for sputum production, and the characteristics of sputum should be reported to the physician. Vital signs should be monitored for tachypnea. Fever may occur, but this is not necessarily attributable to atelectasis. Incentive spirometry has been a mainstay of nursing postoperative atelectasis prevention; however, recent studies have indicated that incentive spirometry alone may not be sufficient to prevent untoward outcomes in postoperative patients. Evidence indicates that the use of deep breathing, adequate pain relief, directed cough, and early patient mobilization are also necessary to increase lung volumes.

When To Seek Help
Dyspnea, tachypnea, and increased work of breathing, as exhibited by the use of accessory muscles during respirations, should be reported to the physician. Changes in the character of lung sounds, cough, and sputum production should also be reported.

Coordination of Care
Preventing atelectasis is vital to improving the outcomes of the postoperative patient. Unfortunately, atelectasis may not always be prevented; therefore, early recognition and treatment are also important, as these can decrease the length of hospitalization and cost and improve patient outcomes. Prevention and treatment of atelectasis require an interprofessional team effort. Physicians, particularly surgeons and anesthesiologists, must consider the role of anesthesia in atelectasis. Nursing needs to monitor the patient pre- and post-procedure. Pharmacists can provide guidance regarding the use of opioids and mucolytics. The nurses administering these medications should report on the efficacy of therapy as well as any adverse events, resulting in dose or medication changes, or other interventions.
Nursing should help to educate the patient and family regarding interventions to minimize the risk of atelectasis, such as incentive spirometry. In summary, atelectasis management takes a collaborative interprofessional team to optimize patient outcomes. [Level V]

Health Teaching and Health Promotion
Atelectasis is a partial collapse of the lung which causes shortness of breath. It may be the result of several different processes, most often associated with poor inspiratory effort, airway obstruction blocking air movement into the lung, additional pressure placed on the outside of the lung, or alterations in the production or function of a protein called surfactant in the lung. Treatment addresses underlying causes of the condition but consists most often of supportive measures, such as deep breathing, incentive spirometry, and providing supplemental O2.

Discharge Planning
In the past, it was believed that postoperative fever was caused by atelectasis. There is no evidence to support this belief.

Figures
Figure: Postoperative atelectasis chest X-ray (image courtesy S. Bhimji, MD)
Figure: Post-surgical atelectasis (image courtesy Dr. Chaigasame)
Figure: Chest computed tomography showing extensive emphysema in the right middle lobe and compressive atelectasis of the right upper lobe (courtesy Humberto C. Sasieta, Francis C. Nichols, Ronald S. Kuzo, Jennifer M. Boland, and James P. Utz)

References
1. Lundquist H, Hedenstierna G, Strandberg A, Tokics L, Brismar B. CT-assessment of dependent lung densities in man during general anaesthesia. Acta Radiol. 1995 Nov;36(6):626-32. [PubMed: 8519574]
2. Peroni DG, Boner AL. Atelectasis: mechanisms, diagnosis and management. Paediatr Respir Rev. 2000 Sep;1(3):274-8. [PubMed: 12531090]
3. Zeng C, Lagier D, Lee JW, Vidal Melo MF. Perioperative Pulmonary Atelectasis: Part I. Biology and Mechanisms. Anesthesiology. 2022 Jan 01;136(1):181-205. [PubMed: 34499087]
4. Magnusson L, Spahn DR. New concepts of atelectasis during general anaesthesia. Br J Anaesth. 2003 Jul;91(1):61-72. [PubMed: 12821566]
5. Gunnarsson L, Tokics L, Gustavsson H, Hedenstierna G. Influence of age on atelectasis formation and gas exchange impairment during general anaesthesia. Br J Anaesth. 1991 Apr;66(4):423-32. [PubMed: 2025468]
6. Tokics L, Hedenstierna G, Strandberg A, Brismar B, Lundquist H. Lung collapse and gas exchange during general anesthesia: effects of spontaneous breathing, muscle paralysis, and positive end-expiratory pressure. Anesthesiology. 1987 Feb;66(2):157-67. [PubMed: 3813078]
7. Lagier D, Zeng C, Fernandez-Bustamante A, Vidal Melo MF. Perioperative Pulmonary Atelectasis: Part II. Clinical Implications. Anesthesiology. 2022 Jan 01;136(1):206-236. [PubMed: 34710217]
8. Woodring JH, Reed JC. Types and mechanisms of pulmonary atelectasis. J Thorac Imaging. 1996 Spring;11(2):92-108. [PubMed: 8820021]
9. Hartland BL, Newell TJ, Damico N. Alveolar recruitment maneuvers under general anesthesia: a systematic review of the literature. Respir Care. 2015 Apr;60(4):609-20. [PubMed: 25425708]
10. Talab HF, Zabani IA, Abdelrahman HS, Bukhari WL, Mamoun I, Ashour MA, Sadeq BB, El Sayed SI. Intraoperative ventilatory strategies for prevention of pulmonary atelectasis in obese patients undergoing laparoscopic bariatric surgery. Anesth Analg. 2009 Nov;109(5):1511-6. [PubMed: 19843790]
11. Craig DB, Wahba WM, Don HF, Couture JG, Becklake MR. "Closing volume" and its relationship to gas exchange in seated and supine positions. J Appl Physiol. 1971 Nov;31(5):717-21. [PubMed: 5117187]
12. den Hollander B, Linssen RSN, Cortjens B, van Etten-Jamaludin FS, van Woensel JBM, Bem RA, Dutch Collaborative PICU Research Network. Use of dornase alfa in the paediatric intensive care unit: current literature and a national cross-sectional survey. Eur J Hosp Pharm. 2022 May;29(3):123-128. [PubMed: 33122405]
13. Restrepo RD, Braverman J. Current challenges in the recognition, prevention and treatment of perioperative pulmonary atelectasis. Expert Rev Respir Med. 2015 Feb;9(1):97-107. [PubMed: 25541220]
14. Mavros MN, Velmahos GC, Falagas ME. Atelectasis as a cause of postoperative fever: where is the clinical evidence? Chest. 2011 Aug;140(2):418-424. [PubMed: 21527508]

All five authors declare no relevant financial relationships with ineligible companies.

Copyright © 2025, StatPearls Publishing LLC. Distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. Bookshelf ID: NBK572133; PMID: 34283499.

Cite this page: Grott K, Chauhan S, Sanghavi DK, et al. Atelectasis (Nursing) [Updated 2024 Feb 26]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.
188885
https://mathoverflow.net/questions/408601/iterated-logarithms-in-analytic-number-theory
Iterated logarithms in analytic number theory (MathOverflow question, asked Nov 15, 2021, by Jesse Elliott; viewed 8k times)

As all analytic number theorists know, iterated logarithms ($\log x$, $\log \log x$, $\log \log \log x$, etc.) are prevalent in analytic number theory. One can give countless examples of this phenomenon. My question is, can someone give an intuitive account for why this is so? Specifics regarding any of the famous theorems involving iterated logarithms are welcome. Many thanks!

EDIT: Thank you so much for the answers so far! I'm still trying to get a better intuition on how a $\log \log \log$ or $\log \log \log \log$ arises, especially in Littlewood's 1914 proof that $\pi(x)-\operatorname{li}(x) = \Omega_{\pm} \left(\frac{\sqrt{x}\log \log \log x}{\log x}\right) \ (x \to \infty)$ or Montgomery's conjecture that $\limsup_{x \to \infty}\dfrac{\lvert\pi(x)-\operatorname{li}(x)\rvert}{\;\frac{\sqrt{x}\, (\log \log \log x)^2}{\log x}\;}$ is finite and positive. I admit to knowing nothing (yet) about sieve theory, so I will have to dive into the proof of the prime gap theorem by Tao, Maynard, et al. Can someone give a more precise account of how the $\log \log \log$ or $\log \log \log \log$ arises in the proof? I'm very familiar with why occurrences of $\log \log$ happen, but once you get to $\log \log \log$, I'm still a bit mystified. Also, is there a good introduction to sieve theory where I could start, or should I just dive right in to the papers on large prime gaps?

FURTHER EDIT: Can someone also explain intuitively the reason for the $\log \log \log$ in Littlewood's theorem? Historically, was this the first occurrence of a triple log in number theory?

Tags: nt.number-theory, analytic-number-theory

Comments:
Steve Huntsman: My guess is that apart from the PNT, a "metareason" for this would be the validity of probabilistic Ansätze (esp. en.wikipedia.org/wiki/Cram%C3%A9r%27s_conjecture). In such settings double logarithms are common, as in e.g. en.wikipedia.org/wiki/Law_of_the_iterated_logarithm.
Jesse Elliott: I never understood how that law applies, since it works only outside a set of measure zero. But in some cases in number theory, as in the study of prime gaps, there is a $\log \log \log \log$ that occurs. I don't think the law of the iterated logarithm is enough to explain this. And nothing I've seen in analytic number theory really uses the law. If you use the law in trying to study the Mertens function, for example, you probably get the wrong order of growth. (There are competing conjectures, and the one with a double log is less likely to be true than the one with the triple log.)
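For reference, since the comment thread cites it only by link (this gloss is added here and is not part of the thread): the law of the iterated logarithm states that if $X_1, X_2, \ldots$ are i.i.d. random variables with mean $0$ and variance $1$, and $S_n = X_1 + \cdots + X_n$, then $$\limsup_{n\to\infty} \frac{S_n}{\sqrt{2n\log\log n}} = 1 \quad \text{almost surely},$$ which is the standard probabilistic setting in which a double logarithm arises.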
$\endgroup$ – Tanmay Khale, Nov 16, 2021 at 1:18

$\begingroup$ Regarding the final question in your edit: I would recommend the latter half of Dimitris Koukoulopoulos' "The Distribution of Prime Numbers" as a very accessible introduction to sieve theory. (It is probably my favorite mathematical text.) In particular, the last section (Part 6) covers the results on long and short gaps that you are interested in. $\endgroup$ – Tanmay Khale, Nov 17, 2021 at 1:09

$\begingroup$ (The most comprehensive account of sieve methods is Friedlander and Iwaniec's "Opera de Cribro." But, being written in 2010, it does not include the state-of-the-art work on small and large gaps. These notes by Kevin Ford are also nice, faculty.math.illinois.edu/~ford/sieve2020.pdf, and do cover small and large gaps.) $\endgroup$ – Tanmay Khale, Nov 17, 2021 at 1:19

3 Answers

Answer by Pace Nielsen (score 42; answered Nov 16, 2021 at 4:43; edited Nov 16, 2021 at 21:39):

$\begingroup$ There are two main sources of repeated logs. (These sources can be further refined into natural subcategories, but I'll only mention a couple of those subcategories.) Those two main sources are:

Type 1: Repeated logs occur because that is just the truth of the matter. One of my favorite examples is a 2008 theorem of Kevin Ford, solving the multiplication table problem. The theorem states that
$$ |\{a\cdot b : a,b\in \{1,2,\ldots,N\}\}| \asymp \frac{N^2}{\log(N)^c(\log\log(N))^{3/2}}, $$
where $c=1-\frac{1+\log\log(2)}{\log(2)}$. Lest you believe that the $(\log\log(N))^{3/2}$ factor is a consequence of this being a 2-dimensional problem, it also shows up in the other dimensions. See this other question for more information.

In some cases it is much easier to see where these extra logs come from. For instance, when turning sums over integers into sums over primes, this often leads to an extra log coming into force, just from the nature of the problem at hand and asymptotics with primes. For instance, we have
$$ \sum_{n=1}^{N}\frac{1}{n}=\log(N)+\gamma+o(1)\ \text{ while }\ \sum_{p\leq N,\ p\text{ prime}}\frac{1}{p}=\log\log(N)+B+o(1), $$
where $\gamma$ and $B$ are well-known constants. These two asymptotics can be thought of as discrete versions of the integral equalities
$$ \int\frac{1}{x}\, dx = \log(x) \ \text{ while }\ \int\frac{1}{x\log(x)}\, dx=\log\log(x). $$
Since primes occur all over number theory, and they also come weighted with an extra log factor, this often contributes extra double-log factors.

Type 2: Repeated logs occur as an artifact of our current best machinery. For example, Rankin showed in 1938 that the largest prime gap below $N$, for $N\gg 0$, is at least
$$ \frac{1}{3}\frac{\log(N)\log\log(N)\log\log\log\log(N)}{(\log\log\log(N))^2}. $$
These extra logs happen when optimizing inequalities, and when using the known machinery of the day. But they do not represent a fundamental truth about the problem. The constant $\frac{1}{3}$ has been slowly improved. Recently, in 2014, Ford, Green, Konyagin, and Tao improved this bound, and Maynard also did so independently, by replacing the fraction $\frac{1}{3}$ by an arbitrary number. See this preprint and this other preprint for more details. Later, these five mathematicians together removed the square from the denominator. If you read the proofs, the logs are coming from the current state-of-the-art sieve methods, together with bounding techniques.
When you solve for the best-fit functions to undo some of the exponentiation that occurs in calculations, the logs just fall out. In these types of problems it is not inconceivable (and actually occurs quite regularly) that one new idea is applied to the problem and the asymptotic changes (sometimes involving more multi-log factors, to account for the small additional room for improvement that was gained). What is surprising about Rankin's bound is that even though it is far from the predicted asymptotic, each extra idea only changed the constant out front, at least until recently.

Edited to add: Working through a well-written proof will, of course, give a deeper understanding of where iterated logs arise in the problem at hand. That is certainly true for the three examples above. However, if you are not yet familiar with sieve theory, or the circle method, I wouldn't recommend working through the big proofs of those theorems mentioned above and in your question (at least, not initially). Rather, I would recommend starting with an introductory text on sieves, such as Cojocaru and Murty's book "An Introduction to Sieve Methods and their Applications". Double-logs occur almost at the very beginning. Triple-logs show up in the exercises in Chapter 5 (and perhaps earlier). Indeed, problem 25 is a typical example of how a triple log is introduced to improve an asymptotic. $\endgroup$

Comments:

$\begingroup$ $+1$, but this seems much more like a comment than an answer $\endgroup$ – mathworker21, Nov 16, 2021 at 6:38

$\begingroup$ @mathworker21: How would it fit in the Comments section, then? =) $\endgroup$ – Jose Arnaldo Bebita Dris, Nov 16, 2021 at 7:09

$\begingroup$ @mathworker21 That's a strange thing to say about the only answer that gets to the heart of the matter. $\endgroup$ – Will Sawin, Nov 16, 2021 at 19:03

$\begingroup$ @JesseElliott I edited my post to answer your new questions. I personally found Cojocaru and Murty's book very enlightening. I really liked chapter 5, in particular. $\endgroup$ – Pace Nielsen, Nov 16, 2021 at 21:48

$\begingroup$ @mathworker21 How is this not an answer to the question? $\endgroup$ – Timothy Chow, Nov 18, 2021 at 4:16

Answer by 2734364041 (score 17; answered Nov 16, 2021 at 7:00; edited Nov 16, 2021 at 8:16):

$\begingroup$ I wouldn't say that there is a single all-prevailing reason, but here are some easily discernible sources:

Given a Dirichlet series $$\sum_{n=1}^{\infty}\frac{a(n)}{n^s}$$ within its region of absolute convergence, we have that $$\frac{d}{ds}\sum_{n=1}^{\infty}\frac{a(n)}{n^s}= -\sum_{n=1}^{\infty}\frac{a(n)\log n}{n^s}.$$

The function $\Gamma(s)$ is ubiquitous in analytic number theory (one need not look beyond the functional equation of $\zeta(s)$), and $$\frac{\Gamma'}{\Gamma}(z) = \log z -\frac{1}{2z}+ O(z^{-2}).$$

Mellin inversion, which allows us to express partial sums of arithmetic functions in terms of contour integration, is (essentially) a logarithmic change of variables away from Fourier inversion.
This does not account for everything (these are largely motivated by multiplicative number theory and connections with $L$-functions), but it accounts for a lot, including the asymptotic
$$\sum_{p\leq x}\frac{1}{p} = \log\log x + B + O((\log x)^{-1}).$$
This single asymptotic accounts for a lot of the presence of $\log x$ and its higher compositions in sieve theory. Combine these with ideas like partial summation, and the logs accumulate. $\endgroup$

Comments:

$\begingroup$ Thank you! Can you explain a little more about how the logs accumulate? For example, where does a $\log \log \log$ or $\log \log \log \log$ come from? Is there some kind of iteration involved? How is partial summation involved? $\endgroup$ – Jesse Elliott, Nov 16, 2021 at 21:15

$\begingroup$ @JesseElliott That may follow from the processes in which logs get plugged into some other logs $\endgroup$ – TravorLZH, Nov 17, 2021 at 0:58

$\begingroup$ @TravorLZH That's too vague and not very informative for me. That's exactly what an iterated log is. $\endgroup$ – Jesse Elliott, Nov 17, 2021 at 11:37

Answer by TravorLZH (score 9; answered Nov 16, 2021 at 16:16):

$\begingroup$ One can often attribute this phenomenon to Mertens' product formula:
$$ \prod_{p\le x}\left(1-\frac1p\right)\sim\frac{e^{-\gamma}}{\log x} $$
Using this fact, we can deduce a minimal order for Euler's totient. That is,
$$ \liminf_{n\to\infty}\frac{\varphi(n)\log\log n}{n}=e^{-\gamma}. $$
Using similar tricks, one can obtain a maximal order for the divisor sum function:
$$ \limsup_{n\to\infty}\frac{\sigma(n)}{n\log\log n}=e^{\gamma}. $$
Proofs of these results are available in Hardy & Wright's An Introduction to the Theory of Numbers and Tenenbaum's Introduction to Analytic and Probabilistic Number Theory. $\endgroup$
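A quick numerical illustration of the double-log asymptotic cited in the answers above, sketched in Python with a plain Eratosthenes sieve (0.2614972... is Mertens' constant $B$):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            is_prime[p*p::p] = [False] * len(is_prime[p*p::p])
    return [p for p, flag in enumerate(is_prime) if flag]

B = 0.2614972128  # Mertens' constant
for x in (10**4, 10**5, 10**6):
    lhs = sum(1.0 / p for p in primes_up_to(x))
    rhs = math.log(math.log(x)) + B
    print(f"x = {x:>8}: sum 1/p = {lhs:.5f}, loglog x + B = {rhs:.5f}")
```

Already at $x = 10^6$ the two sides agree to about four decimal places, which is exactly the slow double-log growth the answers describe.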
188886
https://dec41.user.srcf.net/notes/IA_M/vectors_and_matrices_thm_proof.pdf
Part IA — Vectors and Matrices
Theorems with proof

Based on lectures by N. Peake
Notes taken by Dexter Chua
Michaelmas 2014

These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.

Complex numbers
Review of complex numbers, including complex conjugate, inverse, modulus, argument and Argand diagram. Informal treatment of complex logarithm, n-th roots and complex powers. de Moivre's theorem.

Vectors
Review of elementary algebra of vectors in R3, including scalar product. Brief discussion of vectors in Rn and Cn; scalar product and the Cauchy-Schwarz inequality. Concepts of linear span, linear independence, subspaces, basis and dimension. Suffix notation: including summation convention, δij and εijk. Vector product and triple product: definition and geometrical interpretation. Solution of linear vector equations. Applications of vectors to geometry, including equations of lines, planes and spheres.

Matrices
Elementary algebra of 3 × 3 matrices, including determinants. Extension to n × n complex matrices. Trace, determinant, non-singular matrices and inverses. Matrices as linear transformations; examples of geometrical actions including rotations, reflections, dilations, shears; kernel and image. Simultaneous linear equations: matrix formulation; existence and uniqueness of solutions, geometric interpretation; Gaussian elimination. Symmetric, anti-symmetric, orthogonal, hermitian and unitary matrices. Decomposition of a general matrix into isotropic, symmetric trace-free and antisymmetric parts.

Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors; geometric significance. Proof that eigenvalues of hermitian matrix are real, and that distinct eigenvalues give an orthogonal basis of eigenvectors. The effect of a general change of basis (similarity transformations). Diagonalization of general matrices: sufficient conditions; examples of matrices that cannot be diagonalized. Canonical forms for 2 × 2 matrices. Discussion of quadratic forms, including change of basis. Classification of conics, cartesian and polar forms. Rotation matrices and Lorentz transformations as transformation groups.

Contents
0 Introduction
1 Complex numbers
  1.1 Basic properties
  1.2 Complex exponential function
  1.3 Roots of unity
  1.4 Complex logarithm and power
  1.5 De Moivre's theorem
  1.6 Lines and circles in C
2 Vectors
  2.1 Definition and basic properties
  2.2 Scalar product: 2.2.1 Geometric picture (R2 and R3 only); 2.2.2 General algebraic definition
  2.3 Cauchy-Schwarz inequality
  2.4 Vector product
  2.5 Scalar triple product
  2.6 Spanning sets and bases: 2.6.1 2D space; 2.6.2 3D space; 2.6.3 Rn space; 2.6.4 Cn space
  2.7 Vector subspaces
  2.8 Suffix notation
  2.9 Geometry: 2.9.1 Lines; 2.9.2 Plane
  2.10 Vector equations
3 Linear maps
  3.1 Examples: 3.1.1 Rotation in R3; 3.1.2 Reflection in R3
  3.2 Linear Maps
  3.3 Rank and nullity
  3.4 Matrices: 3.4.1 Examples; 3.4.2 Matrix Algebra; 3.4.3 Decomposition of an n × n matrix; 3.4.4 Matrix inverse
  3.5 Determinants: 3.5.1 Permutations; 3.5.2 Properties of determinants; 3.5.3 Minors and Cofactors
4 Matrices and linear equations
  4.1 Simple example, 2 × 2
  4.2 Inverse of an n × n matrix
  4.3 Homogeneous and inhomogeneous equations: 4.3.1 Gaussian elimination
  4.4 Matrix rank
  4.5 Homogeneous problem Ax = 0: 4.5.1 Geometrical interpretation; 4.5.2 Linear mapping view of Ax = 0
  4.6 General solution of Ax = d
5 Eigenvalues and eigenvectors
  5.1 Preliminaries and definitions
  5.2 Linearly independent eigenvectors
  5.3 Transformation matrices: 5.3.1 Transformation law for vectors; 5.3.2 Transformation law for matrix
  5.4 Similar matrices
  5.5 Diagonalizable matrices
  5.6 Canonical (Jordan normal) form
  5.7 Cayley-Hamilton Theorem
  5.8 Eigenvalues and eigenvectors of a Hermitian matrix: 5.8.1 Eigenvalues and eigenvectors; 5.8.2 Gram-Schmidt orthogonalization (non-examinable); 5.8.3 Unitary transformation; 5.8.4 Diagonalization of n × n Hermitian matrices; 5.8.5 Normal matrices
6 Quadratic forms and conics
  6.1 Quadrics and conics: 6.1.1 Quadrics; 6.1.2 Conic sections (n = 2)
  6.2 Focus-directrix property
7 Transformation groups
  7.1 Groups of orthogonal matrices
  7.2 Length preserving matrices
  7.3 Lorentz transformations

0 Introduction

1 Complex numbers

1.1 Basic properties

Proposition. $z\bar z = a^2 + b^2 = |z|^2$.

Proposition. $z^{-1} = \bar z/|z|^2$.

Theorem (Triangle inequality). For all $z_1, z_2 \in \mathbb{C}$, we have $|z_1 + z_2| \leq |z_1| + |z_2|$. Alternatively, we have $|z_1 - z_2| \geq \bigl||z_1| - |z_2|\bigr|$.

1.2 Complex exponential function

Lemma.
$$\sum_{n=0}^{\infty}\sum_{m=0}^{\infty} a_{mn} = \sum_{r=0}^{\infty}\sum_{m=0}^{r} a_{r-m,m}$$

Proof.
$$\sum_{n=0}^{\infty}\sum_{m=0}^{\infty} a_{mn} = a_{00} + a_{01} + a_{02} + \cdots + a_{10} + a_{11} + a_{12} + \cdots + a_{20} + a_{21} + a_{22} + \cdots = (a_{00}) + (a_{10} + a_{01}) + (a_{20} + a_{11} + a_{02}) + \cdots = \sum_{r=0}^{\infty}\sum_{m=0}^{r} a_{r-m,m}$$

Theorem. $\exp(z_1)\exp(z_2) = \exp(z_1 + z_2)$

Proof.
$$\exp(z_1)\exp(z_2) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{z_1^m}{m!}\frac{z_2^n}{n!} = \sum_{r=0}^{\infty}\sum_{m=0}^{r}\frac{z_1^{r-m}}{(r-m)!}\frac{z_2^m}{m!} = \sum_{r=0}^{\infty}\frac{1}{r!}\sum_{m=0}^{r}\frac{r!}{(r-m)!\,m!}z_1^{r-m}z_2^m = \sum_{r=0}^{\infty}\frac{(z_1+z_2)^r}{r!}$$

Theorem. $e^{iz} = \cos z + i\sin z$.

Proof.
$$e^{iz} = \sum_{n=0}^{\infty}\frac{i^n}{n!}z^n = \sum_{n=0}^{\infty}\frac{i^{2n}}{(2n)!}z^{2n} + \sum_{n=0}^{\infty}\frac{i^{2n+1}}{(2n+1)!}z^{2n+1} = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}z^{2n} + i\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}z^{2n+1} = \cos z + i\sin z$$

1.3 Roots of unity

Proposition. If $\omega = \exp\left(\frac{2\pi i}{n}\right)$, then $1 + \omega + \omega^2 + \cdots + \omega^{n-1} = 0$.

Proof. Two proofs are provided:
(i) Consider the equation $z^n = 1$. The coefficient of $z^{n-1}$ is the sum of all roots. Since the coefficient of $z^{n-1}$ is 0, the sum of all roots $= 1 + \omega + \omega^2 + \cdots + \omega^{n-1} = 0$.
(ii) Since $\omega^n - 1 = (\omega - 1)(1 + \omega + \cdots + \omega^{n-1})$ and $\omega \neq 1$, dividing by $(\omega - 1)$ gives $1 + \omega + \cdots + \omega^{n-1} = (\omega^n - 1)/(\omega - 1) = 0$.

1.4 Complex logarithm and power

1.5 De Moivre's theorem

Theorem (De Moivre's theorem). $\cos n\theta + i\sin n\theta = (\cos\theta + i\sin\theta)^n$.

Proof. First prove the $n \geq 0$ case by induction. The $n = 0$ case is true since it merely reads $1 = 1$. We then have
$$(\cos\theta + i\sin\theta)^{n+1} = (\cos\theta + i\sin\theta)^n(\cos\theta + i\sin\theta) = (\cos n\theta + i\sin n\theta)(\cos\theta + i\sin\theta) = \cos(n+1)\theta + i\sin(n+1)\theta$$
If $n < 0$, let $m = -n$. Then $m > 0$ and
$$(\cos\theta + i\sin\theta)^{-m} = (\cos m\theta + i\sin m\theta)^{-1} = \frac{\cos m\theta - i\sin m\theta}{(\cos m\theta + i\sin m\theta)(\cos m\theta - i\sin m\theta)} = \frac{\cos(-m\theta) + i\sin(-m\theta)}{\cos^2 m\theta + \sin^2 m\theta} = \cos n\theta + i\sin n\theta$$

1.6 Lines and circles in C

Theorem (Equation of straight line). The equation of a straight line through $z_0$ and parallel to $w$ is given by $z\bar w - \bar z w = z_0\bar w - \bar z_0 w$.

Theorem. The general equation of a circle with center $c \in \mathbb{C}$ and radius $\rho \in \mathbb{R}^+$ can be given by $z\bar z - \bar c z - c\bar z = \rho^2 - c\bar c$.
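A quick numerical sanity check of the roots-of-unity identity and De Moivre's theorem, sketched in Python with the standard cmath module:

```python
import cmath

n = 7
omega = cmath.exp(2j * cmath.pi / n)

# 1 + omega + ... + omega^(n-1) vanishes for a primitive n-th root of unity
print(abs(sum(omega**k for k in range(n))))  # ~1e-15, i.e. zero up to rounding

# De Moivre: (cos t + i sin t)^n == cos(nt) + i sin(nt)
t = 0.37
lhs = (cmath.cos(t) + 1j * cmath.sin(t)) ** n
rhs = cmath.cos(n * t) + 1j * cmath.sin(n * t)
print(abs(lhs - rhs))  # ~1e-15
```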
2 Vectors

2.1 Definition and basic properties

2.2 Scalar product

2.2.1 Geometric picture (R2 and R3 only)

2.2.2 General algebraic definition

2.3 Cauchy-Schwarz inequality

Theorem (Cauchy-Schwarz inequality). For all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$, $|\mathbf{x}\cdot\mathbf{y}| \leq |\mathbf{x}||\mathbf{y}|$.

Proof. Consider the expression $|\mathbf{x} - \lambda\mathbf{y}|^2$. We must have
$$|\mathbf{x} - \lambda\mathbf{y}|^2 = (\mathbf{x} - \lambda\mathbf{y})\cdot(\mathbf{x} - \lambda\mathbf{y}) = \lambda^2|\mathbf{y}|^2 - \lambda(2\mathbf{x}\cdot\mathbf{y}) + |\mathbf{x}|^2 \geq 0.$$
Viewing this as a quadratic in $\lambda$, we see that the quadratic is non-negative and thus cannot have two distinct real roots. Thus the discriminant $\Delta \leq 0$, so
$$4(\mathbf{x}\cdot\mathbf{y})^2 \leq 4|\mathbf{y}|^2|\mathbf{x}|^2, \quad (\mathbf{x}\cdot\mathbf{y})^2 \leq |\mathbf{x}|^2|\mathbf{y}|^2, \quad |\mathbf{x}\cdot\mathbf{y}| \leq |\mathbf{x}||\mathbf{y}|.$$

Corollary (Triangle inequality). $|\mathbf{x} + \mathbf{y}| \leq |\mathbf{x}| + |\mathbf{y}|$.

Proof.
$$|\mathbf{x}+\mathbf{y}|^2 = (\mathbf{x}+\mathbf{y})\cdot(\mathbf{x}+\mathbf{y}) = |\mathbf{x}|^2 + 2\mathbf{x}\cdot\mathbf{y} + |\mathbf{y}|^2 \leq |\mathbf{x}|^2 + 2|\mathbf{x}||\mathbf{y}| + |\mathbf{y}|^2 = (|\mathbf{x}|+|\mathbf{y}|)^2.$$
So $|\mathbf{x}+\mathbf{y}| \leq |\mathbf{x}|+|\mathbf{y}|$.

2.4 Vector product

Proposition.
$$\mathbf{a}\times\mathbf{b} = (a_1\hat{\mathbf{i}} + a_2\hat{\mathbf{j}} + a_3\hat{\mathbf{k}})\times(b_1\hat{\mathbf{i}} + b_2\hat{\mathbf{j}} + b_3\hat{\mathbf{k}}) = (a_2b_3 - a_3b_2)\hat{\mathbf{i}} + \cdots = \begin{vmatrix}\hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}}\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\end{vmatrix}$$

2.5 Scalar triple product

Proposition. If a parallelepiped has sides represented by vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$ that form a right-handed system, then the volume of the parallelepiped is given by $[\mathbf{a}, \mathbf{b}, \mathbf{c}]$.

Proof. The area of the base of the parallelepiped is $|\mathbf{b}||\mathbf{c}|\sin\theta = |\mathbf{b}\times\mathbf{c}|$. Thus the volume is $|\mathbf{b}\times\mathbf{c}||\mathbf{a}|\cos\phi = |\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})|$, where $\phi$ is the angle between $\mathbf{a}$ and the normal to $\mathbf{b}$ and $\mathbf{c}$. However, since $\mathbf{a}, \mathbf{b}, \mathbf{c}$ form a right-handed system, we have $\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) \geq 0$. Therefore the volume is $\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})$.

Theorem. $\mathbf{a}\times(\mathbf{b}+\mathbf{c}) = \mathbf{a}\times\mathbf{b} + \mathbf{a}\times\mathbf{c}$.

Proof. Let $\mathbf{d} = \mathbf{a}\times(\mathbf{b}+\mathbf{c}) - \mathbf{a}\times\mathbf{b} - \mathbf{a}\times\mathbf{c}$. We have
$$\mathbf{d}\cdot\mathbf{d} = \mathbf{d}\cdot[\mathbf{a}\times(\mathbf{b}+\mathbf{c})] - \mathbf{d}\cdot(\mathbf{a}\times\mathbf{b}) - \mathbf{d}\cdot(\mathbf{a}\times\mathbf{c}) = (\mathbf{b}+\mathbf{c})\cdot(\mathbf{d}\times\mathbf{a}) - \mathbf{b}\cdot(\mathbf{d}\times\mathbf{a}) - \mathbf{c}\cdot(\mathbf{d}\times\mathbf{a}) = 0$$
Thus $\mathbf{d} = \mathbf{0}$.

2.6 Spanning sets and bases

2.6.1 2D space

Theorem. The coefficients $\lambda, \mu$ are unique.

Proof. Suppose that $\mathbf{r} = \lambda\mathbf{a} + \mu\mathbf{b} = \lambda'\mathbf{a} + \mu'\mathbf{b}$. Take the vector product with $\mathbf{a}$ on both sides to get $(\mu - \mu')\mathbf{a}\times\mathbf{b} = \mathbf{0}$. Since $\mathbf{a}\times\mathbf{b} \neq \mathbf{0}$, then $\mu = \mu'$. Similarly, $\lambda = \lambda'$.

2.6.2 3D space

Theorem. If $\mathbf{a}, \mathbf{b}, \mathbf{c} \in \mathbb{R}^3$ are non-coplanar, i.e. $\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) \neq 0$, then they form a basis of $\mathbb{R}^3$.

Proof. For any $\mathbf{r}$, write $\mathbf{r} = \lambda\mathbf{a} + \mu\mathbf{b} + \nu\mathbf{c}$. Performing the scalar product with $\mathbf{b}\times\mathbf{c}$ on both sides, one obtains
$$\mathbf{r}\cdot(\mathbf{b}\times\mathbf{c}) = \lambda\,\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) + \mu\,\mathbf{b}\cdot(\mathbf{b}\times\mathbf{c}) + \nu\,\mathbf{c}\cdot(\mathbf{b}\times\mathbf{c}) = \lambda[\mathbf{a},\mathbf{b},\mathbf{c}].$$
Thus $\lambda = [\mathbf{r},\mathbf{b},\mathbf{c}]/[\mathbf{a},\mathbf{b},\mathbf{c}]$. The values of $\mu$ and $\nu$ can be found similarly. Thus each $\mathbf{r}$ can be written as a linear combination of $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$. By the formula derived above, it follows that if $\alpha\mathbf{a} + \beta\mathbf{b} + \gamma\mathbf{c} = \mathbf{0}$, then $\alpha = \beta = \gamma = 0$. Thus they are linearly independent.

2.6.3 Rn space

2.6.4 Cn space

2.7 Vector subspaces

2.8 Suffix notation

Proposition. $(\mathbf{a}\times\mathbf{b})_i = \varepsilon_{ijk}a_jb_k$

Proof. By expansion of the formula.

Theorem. $\varepsilon_{ijk}\varepsilon_{ipq} = \delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}$

Proof. Proof by exhaustion:
$$\text{RHS} = \begin{cases}+1 & \text{if } j = p \text{ and } k = q\\ -1 & \text{if } j = q \text{ and } k = p\\ 0 & \text{otherwise}\end{cases}$$
On the LHS, summing over $i$, the only non-zero terms are when $j, k \neq i$ and $p, q \neq i$. If $j = p$ and $k = q$, the LHS is $(-1)^2$ or $(+1)^2 = 1$. If $j = q$ and $k = p$, the LHS is $(+1)(-1)$ or $(-1)(+1) = -1$. All other possibilities result in 0.

Proposition. $\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \mathbf{b}\cdot(\mathbf{c}\times\mathbf{a})$

Proof. In suffix notation, we have
$$\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = a_i(\mathbf{b}\times\mathbf{c})_i = \varepsilon_{ijk}b_jc_ka_i = \varepsilon_{jki}b_jc_ka_i = \mathbf{b}\cdot(\mathbf{c}\times\mathbf{a}).$$

Theorem (Vector triple product). $\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\mathbf{c}$.

Proof.
$$[\mathbf{a}\times(\mathbf{b}\times\mathbf{c})]_i = \varepsilon_{ijk}a_j(\mathbf{b}\times\mathbf{c})_k = \varepsilon_{ijk}\varepsilon_{kpq}a_jb_pc_q = \varepsilon_{ijk}\varepsilon_{pqk}a_jb_pc_q = (\delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp})a_jb_pc_q = a_jb_ic_j - a_jc_ib_j = (\mathbf{a}\cdot\mathbf{c})b_i - (\mathbf{a}\cdot\mathbf{b})c_i$$

Proposition. $(\mathbf{a}\times\mathbf{b})\cdot(\mathbf{a}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{a})(\mathbf{b}\cdot\mathbf{c}) - (\mathbf{a}\cdot\mathbf{b})(\mathbf{a}\cdot\mathbf{c})$.

Proof.
$$\text{LHS} = (\mathbf{a}\times\mathbf{b})_i(\mathbf{a}\times\mathbf{c})_i = \varepsilon_{ijk}a_jb_k\varepsilon_{ipq}a_pc_q = (\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp})a_jb_ka_pc_q = a_jb_ka_jc_k - a_jb_ka_kc_j = (\mathbf{a}\cdot\mathbf{a})(\mathbf{b}\cdot\mathbf{c}) - (\mathbf{a}\cdot\mathbf{b})(\mathbf{a}\cdot\mathbf{c})$$

2.9 Geometry

2.9.1 Lines

Theorem. The equation of a straight line through $\mathbf{a}$ and parallel to $\mathbf{t}$ is $(\mathbf{x}-\mathbf{a})\times\mathbf{t} = \mathbf{0}$, or $\mathbf{x}\times\mathbf{t} = \mathbf{a}\times\mathbf{t}$.

2.9.2 Plane

Theorem. The equation of a plane through $\mathbf{b}$ with normal $\mathbf{n}$ is given by $\mathbf{x}\cdot\mathbf{n} = \mathbf{b}\cdot\mathbf{n}$.
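The suffix-notation identities above are easy to sanity-check numerically. A small sketch with NumPy and random vectors (an illustration, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))  # three random 3-vectors

# Vector triple product: a x (b x c) = (a.c) b - (a.b) c
lhs = np.cross(a, np.cross(b, c))
rhs = a.dot(c) * b - a.dot(b) * c
print(np.allclose(lhs, rhs))  # True

# (a x b).(a x c) = (a.a)(b.c) - (a.b)(a.c)
lhs2 = np.cross(a, b).dot(np.cross(a, c))
rhs2 = a.dot(a) * b.dot(c) - a.dot(b) * a.dot(c)
print(np.isclose(lhs2, rhs2))  # True
```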
2.10 Vector equations

3 Linear maps

3.1 Examples

3.1.1 Rotation in R3

3.1.2 Reflection in R3

3.2 Linear Maps

Theorem. Consider a linear map $f: U \to V$, where $U, V$ are vector spaces. Then $\operatorname{im}(f)$ is a subspace of $V$, and $\ker(f)$ is a subspace of $U$.

Proof. Both are non-empty since $f(\mathbf{0}) = \mathbf{0}$. If $\mathbf{x}, \mathbf{y} \in \operatorname{im}(f)$, then $\exists\,\mathbf{a}, \mathbf{b} \in U$ such that $\mathbf{x} = f(\mathbf{a})$, $\mathbf{y} = f(\mathbf{b})$. Then $\lambda\mathbf{x} + \mu\mathbf{y} = \lambda f(\mathbf{a}) + \mu f(\mathbf{b}) = f(\lambda\mathbf{a} + \mu\mathbf{b})$. Now $\lambda\mathbf{a} + \mu\mathbf{b} \in U$ since $U$ is a vector space, so there is an element in $U$ that maps to $\lambda\mathbf{x} + \mu\mathbf{y}$. So $\lambda\mathbf{x} + \mu\mathbf{y} \in \operatorname{im}(f)$ and $\operatorname{im}(f)$ is a subspace of $V$.

Suppose $\mathbf{x}, \mathbf{y} \in \ker(f)$, i.e. $f(\mathbf{x}) = f(\mathbf{y}) = \mathbf{0}$. Then $f(\lambda\mathbf{x} + \mu\mathbf{y}) = \lambda f(\mathbf{x}) + \mu f(\mathbf{y}) = \lambda\mathbf{0} + \mu\mathbf{0} = \mathbf{0}$. Therefore $\lambda\mathbf{x} + \mu\mathbf{y} \in \ker(f)$.

3.3 Rank and nullity

Theorem (Rank-nullity theorem). For a linear map $f: U \to V$, $r(f) + n(f) = \dim(U)$.

Proof. (Non-examinable) Write $\dim(U) = n$ and $n(f) = m$. If $m = n$, then $f$ is the zero map, and the proof is trivial, since $r(f) = 0$. Otherwise, assume $m < n$. Suppose $\{\mathbf{e}_1, \ldots, \mathbf{e}_m\}$ is a basis of $\ker f$. Extend this to a basis of the whole of $U$ to get $\{\mathbf{e}_1, \ldots, \mathbf{e}_m, \mathbf{e}_{m+1}, \ldots, \mathbf{e}_n\}$. To prove the theorem, we need to prove that $\{f(\mathbf{e}_{m+1}), f(\mathbf{e}_{m+2}), \ldots, f(\mathbf{e}_n)\}$ is a basis of $\operatorname{im}(f)$.

(i) First show that it spans $\operatorname{im}(f)$. Take $\mathbf{y} \in \operatorname{im}(f)$. Thus $\exists\,\mathbf{x} \in U$ such that $\mathbf{y} = f(\mathbf{x})$. Then $\mathbf{y} = f(\alpha_1\mathbf{e}_1 + \cdots + \alpha_n\mathbf{e}_n)$, since $\mathbf{e}_1, \ldots, \mathbf{e}_n$ is a basis of $U$. Thus $\mathbf{y} = \alpha_1 f(\mathbf{e}_1) + \cdots + \alpha_m f(\mathbf{e}_m) + \alpha_{m+1}f(\mathbf{e}_{m+1}) + \cdots + \alpha_nf(\mathbf{e}_n)$. The first $m$ terms map to $\mathbf{0}$, since $\mathbf{e}_1, \ldots, \mathbf{e}_m$ is the basis of the kernel of $f$. Thus $\mathbf{y} = \alpha_{m+1}f(\mathbf{e}_{m+1}) + \cdots + \alpha_nf(\mathbf{e}_n)$.

(ii) To show that they are linearly independent, suppose $\alpha_{m+1}f(\mathbf{e}_{m+1}) + \cdots + \alpha_nf(\mathbf{e}_n) = \mathbf{0}$. Then $f(\alpha_{m+1}\mathbf{e}_{m+1} + \cdots + \alpha_n\mathbf{e}_n) = \mathbf{0}$. Thus $\alpha_{m+1}\mathbf{e}_{m+1} + \cdots + \alpha_n\mathbf{e}_n \in \ker(f)$. Since $\{\mathbf{e}_1, \ldots, \mathbf{e}_m\}$ spans $\ker(f)$, there exist some $\alpha_1, \ldots, \alpha_m$ such that $\alpha_{m+1}\mathbf{e}_{m+1} + \cdots + \alpha_n\mathbf{e}_n = \alpha_1\mathbf{e}_1 + \cdots + \alpha_m\mathbf{e}_m$. But $\mathbf{e}_1, \ldots, \mathbf{e}_n$ are a basis of $U$ and are linearly independent. So $\alpha_i = 0$ for all $i$. Then the only solution to the equation $\alpha_{m+1}f(\mathbf{e}_{m+1}) + \cdots + \alpha_nf(\mathbf{e}_n) = \mathbf{0}$ is $\alpha_i = 0$, and they are linearly independent by definition.

3.4 Matrices

3.4.1 Examples

3.4.2 Matrix Algebra

Proposition.
(i) $(A^T)^T = A$.
(ii) If $\mathbf{x}$ is a column vector $(x_1, x_2, \ldots, x_n)^T$, then $\mathbf{x}^T$ is the row vector $(x_1\ x_2\ \cdots\ x_n)$.
(iii) $(AB)^T = B^TA^T$, since $(AB)^T_{ij} = (AB)_{ji} = A_{jk}B_{ki} = B_{ki}A_{jk} = (B^T)_{ik}(A^T)_{kj} = (B^TA^T)_{ij}$.

Proposition. $\operatorname{tr}(BC) = \operatorname{tr}(CB)$

Proof. $\operatorname{tr}(BC) = B_{ik}C_{ki} = C_{ki}B_{ik} = (CB)_{kk} = \operatorname{tr}(CB)$.

3.4.3 Decomposition of an n × n matrix

3.4.4 Matrix inverse

Proposition. $(AB)^{-1} = B^{-1}A^{-1}$

Proof. $(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}B = I$.

3.5 Determinants

3.5.1 Permutations

Proposition. Any $q$-cycle can be written as a product of 2-cycles.

Proof. $(1\ 2\ 3\ \cdots\ n) = (1\ 2)(2\ 3)(3\ 4)\cdots(n-1\ n)$.

Proposition.
$$\begin{vmatrix}a & b\\ c & d\end{vmatrix} = ad - bc$$

3.5.2 Properties of determinants

Proposition. $\det(A) = \det(A^T)$.

Proof. Take a single term $A_{\sigma(1)1}A_{\sigma(2)2}\cdots A_{\sigma(n)n}$ and let $\rho$ be another permutation in $S_n$. We have
$$A_{\sigma(1)1}A_{\sigma(2)2}\cdots A_{\sigma(n)n} = A_{\sigma(\rho(1))\rho(1)}A_{\sigma(\rho(2))\rho(2)}\cdots A_{\sigma(\rho(n))\rho(n)}$$
since the right hand side is just a re-ordering of the order of multiplication. Choose $\rho = \sigma^{-1}$ and note that $\varepsilon(\sigma) = \varepsilon(\rho)$. Then
$$\det(A) = \sum_{\rho\in S_n}\varepsilon(\rho)A_{1\rho(1)}A_{2\rho(2)}\cdots A_{n\rho(n)} = \det(A^T).$$

Proposition. If matrix $B$ is formed by multiplying every element in a single row of $A$ by a scalar $\lambda$, then $\det(B) = \lambda\det(A)$. Consequently, $\det(\lambda A) = \lambda^n\det(A)$.

Proof. Each term in the sum contains exactly one element from the scaled row, so each term is multiplied by $\lambda$ and $\det(B) = \lambda\det(A)$. In $\lambda A$ every row is scaled, so the whole sum is multiplied by $\lambda^n$.

Proposition. If 2 rows (or 2 columns) of $A$ are identical, the determinant is 0.

Proof. wlog, suppose columns 1 and 2 are the same. Then
$$\det(A) = \sum_{\sigma\in S_n}\varepsilon(\sigma)A_{\sigma(1)1}A_{\sigma(2)2}\cdots A_{\sigma(n)n}.$$
Now write an arbitrary $\sigma$ in the form $\sigma = \rho\,(1\ 2)$. Then $\varepsilon(\sigma) = \varepsilon(\rho)\varepsilon((1\ 2)) = -\varepsilon(\rho)$. So
$$\det(A) = \sum_{\rho\in S_n}-\varepsilon(\rho)A_{\rho(2)1}A_{\rho(1)2}A_{\rho(3)3}\cdots A_{\rho(n)n}.$$
But columns 1 and 2 are identical, so $A_{\rho(2)1} = A_{\rho(2)2}$ and $A_{\rho(1)2} = A_{\rho(1)1}$. So $\det(A) = -\det(A)$ and $\det(A) = 0$.

Proposition. If 2 rows or 2 columns of a matrix are linearly dependent, then the determinant is zero.

Proof. Suppose in $A$, (column $r$) $+\ \lambda$(column $s$) $= 0$. Define
$$B_{ij} = \begin{cases}A_{ij} & j \neq r\\ A_{ij} + \lambda A_{is} & j = r\end{cases}.$$
Then $\det(B) = \det(A) + \lambda\det(\text{matrix with column } r = \text{column } s) = \det(A)$. Then we can see that the $r$th column of $B$ is all zeroes. So each term in the sum contains one zero factor and $\det(A) = \det(B) = 0$.

Proposition. Given a matrix $A$, if $B$ is a matrix obtained by adding a multiple of a column (or row) of $A$ to another column (or row) of $A$, then $\det A = \det B$.

Corollary. Swapping two rows or columns of a matrix negates the determinant.

Proof. We do the column case only. Let $A = (\mathbf{a}_1\ \cdots\ \mathbf{a}_i\ \cdots\ \mathbf{a}_j\ \cdots\ \mathbf{a}_n)$. Then
$$\det(\mathbf{a}_1\ \cdots\ \mathbf{a}_i\ \cdots\ \mathbf{a}_j\ \cdots) = \det(\mathbf{a}_1\ \cdots\ \mathbf{a}_i+\mathbf{a}_j\ \cdots\ \mathbf{a}_j\ \cdots) = \det(\mathbf{a}_1\ \cdots\ \mathbf{a}_i+\mathbf{a}_j\ \cdots\ \mathbf{a}_j-(\mathbf{a}_i+\mathbf{a}_j)\ \cdots) = \det(\mathbf{a}_1\ \cdots\ \mathbf{a}_i+\mathbf{a}_j\ \cdots\ -\mathbf{a}_i\ \cdots) = \det(\mathbf{a}_1\ \cdots\ \mathbf{a}_j\ \cdots\ -\mathbf{a}_i\ \cdots) = -\det(\mathbf{a}_1\ \cdots\ \mathbf{a}_j\ \cdots\ \mathbf{a}_i\ \cdots)$$
Alternatively, we can prove this from the definition directly, using the fact that the sign of a transposition is $-1$ (and that the sign is multiplicative).

Proposition. $\det(AB) = \det(A)\det(B)$.

Proof. First note that $\sum_\sigma\varepsilon(\sigma)A_{\sigma(1)\rho(1)}A_{\sigma(2)\rho(2)}\cdots = \varepsilon(\rho)\det(A)$, i.e. swapping columns (or rows) an even/odd number of times gives a factor of $\pm 1$ respectively. We can prove this by writing $\sigma = \mu\rho$. Now
$$\det AB = \sum_\sigma\varepsilon(\sigma)(AB)_{\sigma(1)1}(AB)_{\sigma(2)2}\cdots(AB)_{\sigma(n)n} = \sum_\sigma\varepsilon(\sigma)\sum_{k_1,k_2,\ldots,k_n}A_{\sigma(1)k_1}B_{k_11}\cdots A_{\sigma(n)k_n}B_{k_nn} = \sum_{k_1,\ldots,k_n}B_{k_11}\cdots B_{k_nn}\underbrace{\sum_\sigma\varepsilon(\sigma)A_{\sigma(1)k_1}A_{\sigma(2)k_2}\cdots A_{\sigma(n)k_n}}_{S}$$
Now consider the many different $S$'s. If in $S$ two of $k_1, \ldots, k_n$ are equal, then $S$ is the determinant of a matrix with two columns the same, i.e. $S = 0$. So we only have to consider the sum over distinct $k_i$'s. Thus the $k_i$ are a permutation of $1, \ldots, n$, say $k_i = \rho(i)$. Then we can write
$$\det AB = \sum_\rho B_{\rho(1)1}\cdots B_{\rho(n)n}\sum_\sigma\varepsilon(\sigma)A_{\sigma(1)\rho(1)}\cdots A_{\sigma(n)\rho(n)} = \sum_\rho B_{\rho(1)1}\cdots B_{\rho(n)n}\,\varepsilon(\rho)\det A = \det A\sum_\rho\varepsilon(\rho)B_{\rho(1)1}\cdots B_{\rho(n)n} = \det A\det B$$

Corollary. If $A$ is orthogonal, $\det A = \pm 1$.

Proof. $AA^T = I$, so $\det(AA^T) = \det I$, i.e. $\det A\det A^T = 1$ and $(\det A)^2 = 1$, giving $\det A = \pm 1$.

Corollary. If $U$ is unitary, $|\det U| = 1$.

Proof. We have $\det U^\dagger = (\det U^T)^* = (\det U)^*$. Since $UU^\dagger = I$, we have $\det(U)\det(U)^* = 1$.

Proposition. In $\mathbb{R}^3$, orthogonal matrices represent either a rotation ($\det = 1$) or a reflection ($\det = -1$).
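A minimal numerical spot check of the product and transpose rules for determinants, using NumPy (an illustration, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))                       # True

# An orthogonal matrix (here, the Q factor of a QR decomposition) has det +-1
Q, _ = np.linalg.qr(A)
print(np.isclose(abs(np.linalg.det(Q)), 1.0))  # True
```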
3.5.3 Minors and Cofactors

Theorem (Laplace expansion formula). For any particular fixed $i$,
$$\det A = \sum_{j=1}^n A_{ji}\Delta_{ji}.$$

Proof. Writing an overline for an omitted term,
$$\det A = \sum_{j_i=1}^n A_{j_ii}\sum_{j_1,\ldots,\overline{j_i},\ldots,j_n}\varepsilon_{j_1j_2\cdots j_n}A_{j_11}A_{j_22}\cdots\overline{A_{j_ii}}\cdots A_{j_nn}.$$
Let $\sigma\in S_n$ be the permutation which moves $j_i$ to the $i$th position and leaves everything else in its natural order, i.e.
$$\sigma = \begin{pmatrix}1 & \cdots & i & i+1 & i+2 & \cdots & j_i-1 & j_i & j_i+1 & \cdots & n\\ 1 & \cdots & j_i & i & i+1 & \cdots & j_i-2 & j_i-1 & j_i+1 & \cdots & n\end{pmatrix}$$
if $j_i > i$, and similarly for other cases. To perform this permutation, $|i - j_i|$ transpositions are made. So $\varepsilon(\sigma) = (-1)^{i-j_i}$.

Now consider the permutation $\rho\in S_n$,
$$\rho = \begin{pmatrix}1 & \cdots & & \overline{j_i} & \cdots & n\\ j_1 & \cdots & \overline{j_i} & & \cdots & j_n\end{pmatrix}.$$
The composition $\rho\sigma$ reorders $(1,\ldots,n)$ to $(j_1, j_2, \ldots, j_n)$. So $\varepsilon(\rho\sigma) = \varepsilon_{j_1\cdots j_n} = \varepsilon(\rho)\varepsilon(\sigma) = (-1)^{i-j_i}\varepsilon_{j_1\cdots\overline{j_i}\cdots j_n}$. Hence the original equation becomes
$$\det A = \sum_{j_i=1}^n A_{j_ii}\sum_{j_1\cdots\overline{j_i}\cdots j_n}(-1)^{i-j_i}\varepsilon_{j_1\cdots\overline{j_i}\cdots j_n}A_{j_11}\cdots\overline{A_{j_ii}}\cdots A_{j_nn} = \sum_{j_i=1}^n A_{j_ii}(-1)^{i-j_i}M_{j_ii} = \sum_{j_i=1}^n A_{j_ii}\Delta_{j_ii} = \sum_{j=1}^n A_{ji}\Delta_{ji}$$

4 Matrices and linear equations

4.1 Simple example, 2 × 2

4.2 Inverse of an n × n matrix

Lemma. $\sum_k A_{ik}\Delta_{jk} = \delta_{ij}\det A$.

Proof. If $i \neq j$, then consider an $n\times n$ matrix $B$, which is identical to $A$ except that the $j$th row is replaced by the $i$th row of $A$. So $\Delta_{jk}$ of $B$ $= \Delta_{jk}$ of $A$, since $\Delta_{jk}$ does not depend on the elements in row $j$. Since $B$ has a duplicate row, we know that
$$0 = \det B = \sum_{k=1}^n B_{jk}\Delta_{jk} = \sum_{k=1}^n A_{ik}\Delta_{jk}.$$
If $i = j$, then the expression is $\det A$ by the Laplace expansion formula.

Theorem. If $\det A \neq 0$, then $A^{-1}$ exists and is given by
$$(A^{-1})_{ij} = \frac{\Delta_{ji}}{\det A}.$$

Proof.
$$(A^{-1})_{ik}A_{kj} = \frac{\Delta_{ki}}{\det A}A_{kj} = \frac{\delta_{ij}\det A}{\det A} = \delta_{ij}.$$
So $A^{-1}A = I$.

4.3 Homogeneous and inhomogeneous equations

4.3.1 Gaussian elimination

4.4 Matrix rank

Theorem. The column rank and row rank are equal for any $m\times n$ matrix.

Proof. Let $r$ be the row rank of $A$. Write the biggest set of linearly independent rows as $\mathbf{v}_1^T, \mathbf{v}_2^T, \ldots, \mathbf{v}_r^T$, or in component form $\mathbf{v}_k^T = (v_{k1}, v_{k2}, \ldots, v_{kn})$ for $k = 1, 2, \ldots, r$. Now denote the $i$th row of $A$ as $\mathbf{r}_i^T = (A_{i1}, A_{i2}, \ldots, A_{in})$. Note that every row of $A$ can be written as a linear combination of the $\mathbf{v}$'s. (If $\mathbf{r}_i$ cannot be written as a linear combination of the $\mathbf{v}$'s, then it is independent of the $\mathbf{v}$'s and the $\mathbf{v}$'s would not be the maximal collection of linearly independent rows.) Write
$$\mathbf{r}_i^T = \sum_{k=1}^r C_{ik}\mathbf{v}_k^T$$
for some coefficients $C_{ik}$ with $1\leq i\leq m$ and $1\leq k\leq r$. Now the elements of $A$ are
$$A_{ij} = (\mathbf{r}_i)_j = \sum_{k=1}^r C_{ik}(\mathbf{v}_k)_j,$$
or
$$\begin{pmatrix}A_{1j}\\ A_{2j}\\ \vdots\\ A_{mj}\end{pmatrix} = \sum_{k=1}^r v_{kj}\begin{pmatrix}C_{1k}\\ C_{2k}\\ \vdots\\ C_{mk}\end{pmatrix}$$
So every column of $A$ can be written as a linear combination of the $r$ column vectors $\mathbf{c}_k$. Then the column rank of $A$ $\leq r$, the row rank of $A$. Apply the same argument to $A^T$ to see that the row rank is $\leq$ the column rank.

4.5 Homogeneous problem Ax = 0

4.5.1 Geometrical interpretation

4.5.2 Linear mapping view of Ax = 0

4.6 General solution of Ax = d

5 Eigenvalues and eigenvectors

5.1 Preliminaries and definitions

Theorem (Fundamental theorem of algebra). Let $p(z)$ be a polynomial of degree $m \geq 1$, i.e.
$$p(z) = \sum_{j=0}^m c_jz^j,$$
where $c_j\in\mathbb{C}$ and $c_m \neq 0$. Then $p(z) = 0$ has precisely $m$ (not necessarily distinct) roots in the complex plane, accounting for multiplicity.

Theorem. $\lambda$ is an eigenvalue of $A$ iff $\det(A - \lambda I) = 0$.

Proof. ($\Rightarrow$) Suppose that $\lambda$ is an eigenvalue and $\mathbf{x}$ is the associated eigenvector. We can rearrange the equation in the definition above to $(A - \lambda I)\mathbf{x} = \mathbf{0}$ and thus $\mathbf{x}\in\ker(A-\lambda I)$. But $\mathbf{x}\neq\mathbf{0}$. So $\ker(A-\lambda I)$ is non-trivial and $\det(A-\lambda I) = 0$. The ($\Leftarrow$) direction is similar.

5.2 Linearly independent eigenvectors

Theorem. Suppose the $n\times n$ matrix $A$ has distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. Then the corresponding eigenvectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ are linearly independent.

Proof. Proof by contradiction: suppose $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ are linearly dependent. Then we can find non-zero constants $d_i$ for $i = 1, 2, \ldots, r$ such that
$$d_1\mathbf{x}_1 + d_2\mathbf{x}_2 + \cdots + d_r\mathbf{x}_r = \mathbf{0}.$$
Suppose that this is the shortest non-trivial linear combination that gives $\mathbf{0}$ (we may need to re-order the $\mathbf{x}_i$). Now apply $(A - \lambda_1 I)$ to the whole equation to obtain
$$d_1(\lambda_1-\lambda_1)\mathbf{x}_1 + d_2(\lambda_2-\lambda_1)\mathbf{x}_2 + \cdots + d_r(\lambda_r-\lambda_1)\mathbf{x}_r = \mathbf{0}.$$
We know that the first term is $\mathbf{0}$, while the others are not (since we assumed $\lambda_i\neq\lambda_j$ for $i\neq j$). So
$$d_2(\lambda_2-\lambda_1)\mathbf{x}_2 + \cdots + d_r(\lambda_r-\lambda_1)\mathbf{x}_r = \mathbf{0},$$
and we have found a shorter linear combination that gives $\mathbf{0}$. Contradiction.

5.3 Transformation matrices

5.3.1 Transformation law for vectors

Theorem. Denote a vector as $\mathbf{u}$ with respect to $\{\mathbf{e}_i\}$ and as $\tilde{\mathbf{u}}$ with respect to $\{\tilde{\mathbf{e}}_i\}$. Then $\mathbf{u} = P\tilde{\mathbf{u}}$ and $\tilde{\mathbf{u}} = P^{-1}\mathbf{u}$.

5.3.2 Transformation law for matrix

Theorem. $\tilde{A} = P^{-1}AP$.

5.4 Similar matrices

Proposition. Similar matrices have the following properties:
(i) Similar matrices have the same determinant.
(ii) Similar matrices have the same trace.
(iii) Similar matrices have the same characteristic polynomial.

Proof. They are proven as follows:
(i) $\det B = \det(P^{-1}AP) = (\det A)(\det P)^{-1}(\det P) = \det A$.
(ii) $\operatorname{tr} B = B_{ii} = P^{-1}_{ij}A_{jk}P_{ki} = A_{jk}P_{ki}P^{-1}_{ij} = A_{jk}(PP^{-1})_{kj} = A_{jk}\delta_{kj} = A_{jj} = \operatorname{tr} A$.
(iii) $p_B(\lambda) = \det(B-\lambda I) = \det(P^{-1}AP-\lambda I) = \det(P^{-1}AP - \lambda P^{-1}IP) = \det(P^{-1}(A-\lambda I)P) = \det(A-\lambda I) = p_A(\lambda)$.

5.5 Diagonalizable matrices

Theorem. Let $\lambda_1, \lambda_2, \ldots, \lambda_r$, with $r\leq n$, be the distinct eigenvalues of $A$. Let $B_1, B_2, \ldots, B_r$ be the bases of the eigenspaces $E_{\lambda_1}, E_{\lambda_2}, \ldots, E_{\lambda_r}$ correspondingly. Then the set
$$B = \bigcup_{i=1}^r B_i$$
is linearly independent.

Proof. Write $B_1 = \{\mathbf{x}_1^{(1)}, \mathbf{x}_2^{(1)}, \ldots, \mathbf{x}_{m(\lambda_1)}^{(1)}\}$. Then $m(\lambda_1) = \dim(E_{\lambda_1})$, and similarly for all $B_i$. Consider the following general linear combination of all elements in $B$, and consider the equation
$$\sum_{i=1}^r\sum_{j=1}^{m(\lambda_i)}\alpha_{ij}\mathbf{x}_j^{(i)} = \mathbf{0}.$$
The first sum is summing over all eigenspaces, and the second sum sums over the basis vectors in $B_i$. Now apply the matrix
$$\prod_{k=1,2,\ldots,\overline{K},\ldots,r}(A - \lambda_kI)$$
(the factor $k = K$ omitted) to the above sum, for some arbitrary $K$. We obtain
$$\sum_{j=1}^{m(\lambda_K)}\alpha_{Kj}\Biggl[\prod_{k=1,2,\ldots,\overline{K},\ldots,r}(\lambda_K-\lambda_k)\Biggr]\mathbf{x}_j^{(K)} = \mathbf{0}.$$
Since the $\mathbf{x}_j^{(K)}$ are linearly independent ($B_K$ is a basis), $\alpha_{Kj} = 0$ for all $j$. Since $K$ was arbitrary, all $\alpha_{ij}$ must be zero. So $B$ is linearly independent.

Proposition. $A$ is diagonalizable iff all its eigenvalues have zero defect.
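A short NumPy check that a similarity transform $P^{-1}AP$ preserves trace, determinant and eigenvalues (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))        # generically invertible
B = np.linalg.inv(P) @ A @ P           # B is similar to A

print(np.isclose(np.trace(A), np.trace(B)))            # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(B))))  # True
```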
5.6 Canonical (Jordan normal) form

Theorem. Any $2\times 2$ complex matrix $A$ is similar to exactly one of
$$\begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix},\quad\begin{pmatrix}\lambda & 0\\ 0 & \lambda\end{pmatrix},\quad\begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}$$

Proof. For each case:
(i) If $A$ has two distinct eigenvalues, then the eigenvectors are linearly independent. Then we can use the matrix $P$ formed from the eigenvectors as its columns.
(ii) If $\lambda_1 = \lambda_2 = \lambda$ and $\dim E_\lambda = 2$, then write $E_\lambda = \operatorname{span}\{\mathbf{u},\mathbf{v}\}$, with $\mathbf{u},\mathbf{v}$ linearly independent. Now use $\{\mathbf{u},\mathbf{v}\}$ as a new basis of $\mathbb{C}^2$ and
$$\tilde A = P^{-1}AP = \begin{pmatrix}\lambda & 0\\ 0 & \lambda\end{pmatrix} = \lambda I.$$
Note that since $P^{-1}AP = \lambda I$, we have $A = P(\lambda I)P^{-1} = \lambda I$. So $A$ is isotropic, i.e. the same with respect to any basis.
(iii) If $\lambda_1 = \lambda_2 = \lambda$ and $\dim(E_\lambda) = 1$, then $E_\lambda = \operatorname{span}\{\mathbf{v}\}$. Now choose as basis of $\mathbb{C}^2$ $\{\mathbf{v},\mathbf{w}\}$, where $\mathbf{w}\in\mathbb{C}^2\setminus E_\lambda$. We know that $A\mathbf{w}\in\mathbb{C}^2$, so $A\mathbf{w} = \alpha\mathbf{v} + \beta\mathbf{w}$. Hence, if we change basis to $\{\mathbf{v},\mathbf{w}\}$, then
$$\tilde A = P^{-1}AP = \begin{pmatrix}\lambda & \alpha\\ 0 & \beta\end{pmatrix}.$$
However, $A$ and $\tilde A$ both have eigenvalue $\lambda$ with algebraic multiplicity 2. So we must have $\beta = \lambda$. To make $\alpha = 1$, let $\mathbf{u} = (\tilde A - \lambda I)\mathbf{w}$. We know $\mathbf{u}\neq\mathbf{0}$ since $\mathbf{w}$ is not in the eigenspace. Then
$$(\tilde A - \lambda I)\mathbf{u} = (\tilde A - \lambda I)^2\mathbf{w} = \begin{pmatrix}0 & \alpha\\ 0 & 0\end{pmatrix}\begin{pmatrix}0 & \alpha\\ 0 & 0\end{pmatrix}\mathbf{w} = \mathbf{0}.$$
So $\mathbf{u}$ is an eigenvector of $\tilde A$ with eigenvalue $\lambda$. We have $\mathbf{u} = \tilde A\mathbf{w} - \lambda\mathbf{w}$, so $\tilde A\mathbf{w} = \mathbf{u} + \lambda\mathbf{w}$. Change basis to $\{\mathbf{u},\mathbf{w}\}$. Then $A$ with respect to this basis is
$$\begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}.$$
This is a two-stage process: $P$ sends the basis to $\{\mathbf{v},\mathbf{w}\}$ and then a matrix $Q$ sends it to $\{\mathbf{u},\mathbf{w}\}$. So the similarity transformation is $Q^{-1}(P^{-1}AP)Q = (PQ)^{-1}A(PQ)$.

Proposition. (Without proof) The canonical form, or Jordan normal form, exists for any $n\times n$ matrix $A$. Specifically, there exists a similarity transformation such that $A$ is similar to a matrix $\tilde A$ that satisfies the following properties:
(i) $\tilde A_{\alpha\alpha} = \lambda_\alpha$, i.e. the diagonal consists of the eigenvalues.
(ii) $\tilde A_{\alpha,\alpha+1} = 0$ or $1$.
(iii) $\tilde A_{ij} = 0$ otherwise.

5.7 Cayley-Hamilton Theorem

Theorem (Cayley-Hamilton theorem). Every $n\times n$ complex matrix satisfies its own characteristic equation.

Proof. We will only prove this for diagonalizable matrices here. So suppose for our matrix $A$ there is some $P$ such that $D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) = P^{-1}AP$. Note that
$$D^i = (P^{-1}AP)(P^{-1}AP)\cdots(P^{-1}AP) = P^{-1}A^iP.$$
Hence, since similar matrices have the same characteristic polynomial,
$$p_A(D) = p_A(P^{-1}AP) = P^{-1}[p_A(A)]P.$$
However, we also know that $D^i = \operatorname{diag}(\lambda_1^i, \lambda_2^i, \ldots, \lambda_n^i)$. So
$$p_A(D) = \operatorname{diag}(p_A(\lambda_1), p_A(\lambda_2), \ldots, p_A(\lambda_n)) = \operatorname{diag}(0, 0, \ldots, 0)$$
since the eigenvalues are roots of $p_A(\lambda) = 0$. So $0 = p_A(D) = P^{-1}p_A(A)P$ and thus $p_A(A) = 0$.

5.8 Eigenvalues and eigenvectors of a Hermitian matrix

5.8.1 Eigenvalues and eigenvectors

Theorem. The eigenvalues of a Hermitian matrix $H$ are real.

Proof. Suppose that $H$ has eigenvalue $\lambda$ with eigenvector $\mathbf{v}\neq\mathbf{0}$. Then $H\mathbf{v} = \lambda\mathbf{v}$. We pre-multiply by $\mathbf{v}^\dagger$, a $1\times n$ row vector, to obtain
$$\mathbf{v}^\dagger H\mathbf{v} = \lambda\mathbf{v}^\dagger\mathbf{v}. \qquad (*)$$
We take the Hermitian conjugate of both sides. The left hand side is $(\mathbf{v}^\dagger H\mathbf{v})^\dagger = \mathbf{v}^\dagger H^\dagger\mathbf{v} = \mathbf{v}^\dagger H\mathbf{v}$ since $H$ is Hermitian. The right hand side is $(\lambda\mathbf{v}^\dagger\mathbf{v})^\dagger = \lambda^*\mathbf{v}^\dagger\mathbf{v}$. So we have $\mathbf{v}^\dagger H\mathbf{v} = \lambda^*\mathbf{v}^\dagger\mathbf{v}$. From $(*)$, we know that $\lambda\mathbf{v}^\dagger\mathbf{v} = \lambda^*\mathbf{v}^\dagger\mathbf{v}$. Since $\mathbf{v}\neq\mathbf{0}$, we know that $\mathbf{v}^\dagger\mathbf{v} = \mathbf{v}\cdot\mathbf{v}\neq 0$. So $\lambda = \lambda^*$ and $\lambda$ is real.

Theorem. The eigenvectors of a Hermitian matrix $H$ corresponding to distinct eigenvalues are orthogonal.

Proof. Let
$$H\mathbf{v}_i = \lambda_i\mathbf{v}_i \quad\text{(i)}, \qquad H\mathbf{v}_j = \lambda_j\mathbf{v}_j \quad\text{(ii)}.$$
Pre-multiply (i) by $\mathbf{v}_j^\dagger$ to obtain $\mathbf{v}_j^\dagger H\mathbf{v}_i = \lambda_i\mathbf{v}_j^\dagger\mathbf{v}_i$ (iii). Pre-multiply (ii) by $\mathbf{v}_i^\dagger$ and take the Hermitian conjugate to obtain $\mathbf{v}_j^\dagger H\mathbf{v}_i = \lambda_j\mathbf{v}_j^\dagger\mathbf{v}_i$ (iv). Equating (iii) and (iv) yields $\lambda_i\mathbf{v}_j^\dagger\mathbf{v}_i = \lambda_j\mathbf{v}_j^\dagger\mathbf{v}_i$. Since $\lambda_i\neq\lambda_j$, we must have $\mathbf{v}_j^\dagger\mathbf{v}_i = 0$. So their inner product is zero and they are orthogonal.

5.8.2 Gram-Schmidt orthogonalization (non-examinable)

5.8.3 Unitary transformation

5.8.4 Diagonalization of n × n Hermitian matrices

Theorem. An $n\times n$ Hermitian matrix has precisely $n$ orthogonal eigenvectors.

Proof. (Non-examinable) Let $\lambda_1, \lambda_2, \ldots, \lambda_r$ be the distinct eigenvalues of $H$ ($r\leq n$), with a set of corresponding orthonormal eigenvectors $B = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r\}$. Extend to a basis of the whole of $\mathbb{C}^n$,
$$B' = \{\mathbf{v}_1, \ldots, \mathbf{v}_r, \mathbf{w}_1, \ldots, \mathbf{w}_{n-r}\}.$$
Now use Gram-Schmidt to create an orthonormal basis
$$\tilde B = \{\mathbf{v}_1, \ldots, \mathbf{v}_r, \mathbf{u}_1, \ldots, \mathbf{u}_{n-r}\},$$
and write $P$ for the matrix with columns $\mathbf{v}_1, \ldots, \mathbf{v}_r, \mathbf{u}_1, \ldots, \mathbf{u}_{n-r}$. We have shown above that this is a unitary matrix, i.e. $P^{-1} = P^\dagger$. So if we change basis, we have
$$P^{-1}HP = P^\dagger HP = \begin{pmatrix}\operatorname{diag}(\lambda_1, \ldots, \lambda_r) & 0\\ 0 & C\end{pmatrix},$$
where $C$ is an $(n-r)\times(n-r)$ Hermitian matrix. The eigenvalues of $C$ are also eigenvalues of $H$ because $\det(H-\lambda I) = \det(P^\dagger HP - \lambda I) = (\lambda_1-\lambda)\cdots(\lambda_r-\lambda)\det(C-\lambda I)$. We can keep repeating the process on $C$ until we finish all rows. For example, if the eigenvalues of $C$ are all distinct, there are $n-r$ orthonormal eigenvectors $\mathbf{w}_j$ (for $j = r+1, \ldots, n$) of $C$. Let
$$Q = \begin{pmatrix}I_r & 0\\ 0 & (\mathbf{w}_{r+1}\ \mathbf{w}_{r+2}\ \cdots\ \mathbf{w}_n)\end{pmatrix}$$
with all other entries 0 (an $r\times r$ identity block in the top left corner and an $(n-r)\times(n-r)$ block with columns formed by the $\mathbf{w}_j$). Since the columns of $Q$ are orthonormal, $Q$ is unitary. So $Q^\dagger P^\dagger HPQ = \operatorname{diag}(\lambda_1, \ldots, \lambda_r, \lambda_{r+1}, \ldots, \lambda_n)$, where the first $r$ $\lambda$'s are distinct and the remaining ones are copies of previous ones. The $n$ linearly independent eigenvectors are the columns of $PQ$.

5.8.5 Normal matrices

Proposition.
(i) If $\lambda$ is an eigenvalue of $N$, then $\lambda^*$ is an eigenvalue of $N^\dagger$.
(ii) The eigenvectors of distinct eigenvalues are orthogonal.
(iii) A normal matrix can always be diagonalized with an orthonormal basis of eigenvectors.

6 Quadratic forms and conics

Theorem. Hermitian forms are real.

Proof. $(\mathbf{x}^\dagger H\mathbf{x})^* = (\mathbf{x}^\dagger H\mathbf{x})^\dagger = \mathbf{x}^\dagger H^\dagger\mathbf{x} = \mathbf{x}^\dagger H\mathbf{x}$. So $(\mathbf{x}^\dagger H\mathbf{x})^* = \mathbf{x}^\dagger H\mathbf{x}$ and it is real.

6.1 Quadrics and conics

6.1.1 Quadrics

6.1.2 Conic sections (n = 2)

6.2 Focus-directrix property

7 Transformation groups

7.1 Groups of orthogonal matrices

Proposition. The set of all $n\times n$ orthogonal matrices $P$ forms a group under matrix multiplication.

Proof.
0. Closure: if $P, Q$ are orthogonal, then consider $R = PQ$. $RR^T = (PQ)(PQ)^T = P(QQ^T)P^T = PP^T = I$. So $R$ is orthogonal.
1. Identity: $I$ satisfies $II^T = I$. So $I$ is orthogonal and is an identity of the group.
2. Inverse: if $P$ is orthogonal, then $P^{-1} = P^T$ by definition, which is also orthogonal.
3. Matrix multiplication is associative since function composition is associative.

7.2 Length preserving matrices

Theorem. Let $P\in O(n)$. Then the following are equivalent:
(i) $P$ is orthogonal;
(ii) $|P\mathbf{x}| = |\mathbf{x}|$;
(iii) $(P\mathbf{x})^T(P\mathbf{y}) = \mathbf{x}^T\mathbf{y}$, i.e. $(P\mathbf{x})\cdot(P\mathbf{y}) = \mathbf{x}\cdot\mathbf{y}$;
(iv) if $(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n)$ are orthonormal, so are $(P\mathbf{v}_1, P\mathbf{v}_2, \ldots, P\mathbf{v}_n)$;
(v) the columns of $P$ are orthonormal.

Proof. We do them one by one:
(i) $\Rightarrow$ (ii): $|P\mathbf{x}|^2 = (P\mathbf{x})^T(P\mathbf{x}) = \mathbf{x}^TP^TP\mathbf{x} = \mathbf{x}^T\mathbf{x} = |\mathbf{x}|^2$.
(ii) $\Rightarrow$ (iii): $|P(\mathbf{x}+\mathbf{y})|^2 = |\mathbf{x}+\mathbf{y}|^2$. The right hand side is $(\mathbf{x}^T+\mathbf{y}^T)(\mathbf{x}+\mathbf{y}) = |\mathbf{x}|^2 + |\mathbf{y}|^2 + 2\mathbf{x}^T\mathbf{y}$. Similarly, the left hand side is $|P\mathbf{x}+P\mathbf{y}|^2 = |P\mathbf{x}|^2 + |P\mathbf{y}|^2 + 2(P\mathbf{x})^TP\mathbf{y} = |\mathbf{x}|^2 + |\mathbf{y}|^2 + 2(P\mathbf{x})^TP\mathbf{y}$. So $(P\mathbf{x})^TP\mathbf{y} = \mathbf{x}^T\mathbf{y}$.
(iii) $\Rightarrow$ (iv): $(P\mathbf{v}_i)^TP\mathbf{v}_j = \mathbf{v}_i^T\mathbf{v}_j = \delta_{ij}$. So the $P\mathbf{v}_i$'s are also orthonormal.
(iv) $\Rightarrow$ (v): Take the $\mathbf{v}_i$'s to be the standard basis. So the columns of $P$, being $P\mathbf{e}_i$, are orthonormal.
(v) $\Rightarrow$ (i): The columns of $P$ are orthonormal. Then $(PP^T)_{ij} = P_{ik}P_{jk} = (P_i)\cdot(P_j) = \delta_{ij}$, viewing $P_i$ as the $i$th column of $P$. So $PP^T = I$.

7.3 Lorentz transformations
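A brief NumPy illustration of two of the results above, Cayley-Hamilton for a random matrix and the realness/orthonormality of Hermitian eigenpairs (a sketch, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

# Cayley-Hamilton: p_A(A) = 0, with p_A the characteristic polynomial
coeffs = np.poly(A)  # [1, c_{n-1}, ..., c_0] for a square 2-D input
pA = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
         for k, c in enumerate(coeffs))
print(np.allclose(pA, 0))  # True

# Hermitian matrix: real eigenvalues, orthonormal eigenvectors
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = M + M.conj().T                     # Hermitian by construction
w, V = np.linalg.eigh(H)
print(np.allclose(w.imag, 0), np.allclose(V.conj().T @ V, np.eye(3)))  # True True
```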
188887
https://2024.sci-hub.se/2116/46909851d8ee46c1279eddfe8e9b0219/kunikiyo1994.pdf
A Monte Carlo simulation of anisotropic electron transport in silicon including full band structure and anisotropic impact-ionization model

T. Kunikiyo, ULSI Laboratory, Mitsubishi Electric Corporation, 4-1 Mizuhara, Itami City, Hyogo 464, Japan
M. Takenaka, VLSI Development Laboratories, Sharp Corporation, 2613-1 Ichinomoto-cho, Tenri City, Nara 632, Japan
Y. Kamakura, M. Yamaji, H. Mizuno,a) M. Morifuji, K. Taniguchi, and C. Hamaguchi, Department of Electronic Engineering, Osaka University, 2-1 Yamada-Oka, Suita City, Osaka 565, Japan

Citation: J. Appl. Phys. 75, 297 (1994); doi: 10.1063/1.355849
(Received 30 July 1993; accepted for publication 20 September 1993)

The physics of electron transport in bulk silicon is investigated by using a newly developed Monte Carlo simulator which improves the state-of-the-art treatment of hot carrier transport. (1) The full band structure of the semiconductor was computed by using an empirical pseudopotential method. (2) A phonon dispersion curve was obtained from an adiabatic bond-charge model. (3) Electron-phonon scattering was computed by using a rigid pseudo-ion model. The calculated scattering rate is consistent with the full band structure and the phonon dispersion curve of silicon, thus leaving no adjustable parameters such as deformation potential coefficients. (4) The impact-ionization rate was calculated by using Fermi's golden rule directly from the full band structure. We took into account the dielectric function depending on both wave vector and transition energy in the numerical calculation of the rate. The impact-ionization rate obtained in the present study strongly depends on both the wave vector and the band index of the conduction electron, which is ignored by the traditional Keldysh formula. (5) In the simulator, the final state of a scattering electron is determined in such a way as to conserve both energy and momentum in scattering processes. The simulated results, under the steady-state conditions as well as under the nonequilibrium conditions, are presented and compared with experimental results. Special attention is focused on anisotropic transport during velocity overshoot. Quantitative agreement between calculated and experimental results confirms the validity of the newly developed Monte Carlo simulator and the physical models that were used.

I. INTRODUCTION

In modern submicrometer metal-oxide-semiconductor transistors (MOSFETs), lateral electric fields reach as much as several hundred kV/cm since the drain voltage does not scale linearly with the channel length. The resulting hot carriers pose a serious threat to the long-term reliability of the devices, which has stimulated a large number of experiments and simulations on hot carrier phenomena to elucidate device degradation versus gate current, substrate current, and hot-carrier-induced luminescence at the gate edge.1-3 One of the key issues of hot-carrier-related degradation is to obtain the electron energy distribution.
The Monte Carlo (MC) models can provide detailed information on the energy distribution of hot carriers. In the MC models, the distribution of the electrons is calculated by simulating their motion under the influence of the applied electric field and scattering processes. The MC models are categorized into several groups in terms of the description of the energy band, the scattering formulae, and the adjustable parameters involved in these formulae, such as deformation potential coefficients for the electron-phonon scattering rate and prefactors in the ordinary Keldysh formula for the impact-ionization rate.

There have been reported three kinds of band structure models: (1) parabolic or nonparabolic bands described by analytical formulae,5,6 (2) multibands which are derived in such a way as to fit the density-of-states (DOS) and/or the group velocity calculated from a realistic band structure,7-9 and (3) a full band structure obtained from empirical pseudopotential calculation.10-12 Single parabolic or nonparabolic band models are only applicable to those electrons with energy below 1 eV, which may be partly justified for narrow- and direct-band-gap semiconductors in low-energy regimes. These models, however, are inappropriate for wide- and indirect-band-gap semiconductors such as silicon. The equi-energy surface of the first conduction band crosses the first Brillouin zone edge at an energy of only about 0.13 eV above the band minima, since the band minima are located at about 0.85 in units of 2π/a along the (100) direction, where a is the lattice constant, i.e., a = 5.431 Å for silicon. Thus, the ellipsoidal approximation of the equi-energy surface breaks down and the DOS calculated from these models is only valid below an energy of about 0.13 eV. Furthermore, these models fail to include the second conduction band.

The multiband models, representing the features of a realistic energy band in analytical formulae, have been established to enhance the computational efficiency. Unfortunately, these models do not adequately represent a realistic energy band, the DOS, and the group velocity consistently without significant errors along all directions. For example, these models fail to represent the star-shaped equi-energy surface originating from the fourfold rotational symmetry around the (100) axes, since these models only provide a circular equi-energy surface.

The full band model is the most accurate among these models and has been used in MC simulation. Although this model needs enormous computational time, it is worthwhile to use this model to assess the microscopic carrier transport because of its accuracy.

There are three major scattering mechanisms in silicon, i.e., scattering with ionized impurities, phonon scatterings, and, at high electric field, impact ionization. Phonon scattering rates are, in general, expressed by a formula involving deformation potential coefficients, an overlap integral, and the DOS, which is a function of the electron energy at the final state after the transition.

a) Present address: ULSI Research Center, Central Research Laboratory, Hitachi Ltd., Kokubunji, Tokyo 185, Japan.
There exist many kinds of phonon scattering models in which the overlap integral is assumed to be unity6 and/or the DOS effect is ignored. Furthermore, the phonon dispersion relation, used for determination of the electron energy in the final state after the transition, is assumed to be constant for optical phonon scatterings and is expressed by analytical formulae involving adjustable parameters such as the maximum phonon frequency. Unfortunately, these analytical approaches fail to represent the anisotropic nature of the phonon dispersion relation.

The other central issue in hot carrier simulation is to clarify the probability of the impact-ionization process, in which carriers are accelerated by a high electric field until they gain sufficient energy to excite an electron from the valence band to the conduction band. The ionization probability has been extensively studied by using analytical approaches, such as the Keldysh formula,13 assuming a parabolic or nonparabolic band structure for both the conduction and the valence bands, even though that assumption breaks down for wide- and indirect-band-gap semiconductors such as silicon. Furthermore, these analytical approaches fail to include the wave-vector-dependent ionization probability due to the anisotropic nature of the energy band structure in silicon.

The adjustable parameters such as deformation potential coefficients or the prefactors in the traditional Keldysh formula have been tuned in such a way that the available experimental data can be reproduced correctly. There still, however, remains a large discrepancy in the values of the physical parameters in spite of numerous investigations. For example, the impact-ionization rate reported by Thoma et al.14 is approximately three orders of magnitude larger than that by Fischetti et al. Nevertheless, both of them reported that the simulated results agree well with experimental data. This means that these fitting approaches, so-called parameter physics, fail to represent accurately a feature of electron transport.

The aim of this article is to investigate the validity of a newly developed Monte Carlo simulator which further improves the state-of-the-art treatment of hot carrier transport in bulk silicon. (1) The full band structure of silicon was computed by using an empirical pseudopotential method.15 (2) A real phonon dispersion relation was obtained from an adiabatic bond-charge model.16,17 (3) The electron-phonon scattering rate was computed by using a rigid pseudo-ion model.18-22 The calculated scattering rate is consistent with the full band structure and the phonon dispersion relation of silicon, thus leaving no adjustable parameters such as deformation potential coefficients. (4) The impact-ionization rate was calculated by using Fermi's golden rule from the full band structure. We took into account the dielectric function depending on both wave vector and transition energy23 in the numerical calculation of the rate. The impact-ionization rate obtained in the present study shows strong anisotropy, which is ignored by the traditional Keldysh formula.13 (5) In the simulator, the final state of the scattering electron is determined in such a way as to conserve both energy and momentum in scattering processes. The simulated results under the steady-state condition will be presented and compared with reported experimental results. Furthermore, simulated results on anisotropic velocity overshoot phenomena will also be presented.
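For reference, the traditional Keldysh expression criticized above treats impact ionization as an isotropic, energy-only soft threshold. A minimal sketch of that textbook form (the prefactor C and threshold E_th below are made-up illustrative values; they are exactly the tunable quantities the authors object to):

```python
def keldysh_ionization_rate(E, E_th=1.1, C=1e12):
    """Keldysh-type soft-threshold impact-ionization rate (1/s).

    E and E_th in eV. The rate depends only on the electron energy E,
    so all wave-vector (anisotropy) information is discarded; C lumps
    the prefactor that is usually fitted to experiment.
    """
    if E <= E_th:
        return 0.0
    return C * ((E - E_th) / E_th) ** 2

for E in (1.0, 1.5, 2.0, 3.0):
    print(E, keldysh_ionization_rate(E))
```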
This article is organized as follows. The MC models employed in the present study are shown in Sec. II. The computational procedure, especially the search procedure used to determine the final state of the electrons in scattering processes, will also be presented. Results obtained in the present study will be shown together with reported experimental data and results from the previous MC works in Sec. III. Finally, we will offer some conclusions from the present study in Sec. IV.

II. MODELS AND COMPUTATIONAL PROCEDURES

A. Energy band structure, phonon dispersion relation, and scattering models

A real energy band structure above 1 eV is necessary to investigate electron transport in silicon, especially at a high electric field, since an analytical nonparabolic band structure largely deviates from the real band above 1 eV. In the present study, the energy band structure of silicon is calculated by the use of the empirical local pseudopotential method, in which the periodic part of the Bloch wave function is expanded with a basis set of reciprocal-lattice vectors G. We have employed a set of 113 G vectors for the expansion. The spin-orbit interactions are not included in the present calculation since they are very small (e.g., 0.044 eV).24 Furthermore, since the form factors of the pseudopotential presented in the literature give a narrower band gap than experimental results, we have extracted new form factors in comparison with experimental results,25-27 especially with the energy band gap and the energies at the X, L, and Γ points. The form factors determined by using the steepest descent method are shown in Table I. Calculated results together with experimental ones are shown in Table II. The simulator in the present study employs the first five conduction bands, as shown in Fig. 1.

TABLE I. Local pseudopotential form factors (Ry).

                        V(√3)     V(√8)     V(√11)
Present work            −0.2258   0.05698   0.070709
Cohen and Bergstresser  −0.21     0.04      0.08

FIG. 1. Energy band structure of silicon calculated by an empirical local pseudopotential method, in which the periodic part of the Bloch wave function is expanded with a basis set of reciprocal-lattice vectors. We have employed a set of 113 reciprocal-lattice vectors for the expansion. (Abscissa: wave vector along L, Γ, X, U/K, Γ.)

Figure 2 shows the DOS obtained from the energy band structure shown in Fig. 1, which is compared with the one based on the parabolic and nonparabolic band structures. Nonparabolicity α = 0.5 eV⁻¹, longitudinal effective mass ml = 0.98m0, and transverse effective mass mt = 0.19m0, where m0 is the mass of a free electron, are employed in the calculation of these analytic band structures. As described in the previous section, since the parabolic band structure is appropriate below an energy of about 0.13 eV, the low-energy DOS obtained in the present study is very close to that obtained from the parabolic band. The nonparabolic band represents the realistic DOS in the energy range below about 2 eV. In high energy regimes (above 2 eV), it is clear that the realistic DOS cannot be reproduced with the nonparabolic band structure.

TABLE II. Energies at symmetry points in conduction bands (eV).

State                 Experimental results   Present calculation
Γ15c                  3.4                    3.342
Γ2'c                  4.15ᵇ                  4.130
X1c                   1.13                   1.194
L1c                   2.0                    2.099
L3c                   3.90ᵇ                  3.906
Indirect energy gap   1.120                  1.068

ᵃSee Ref. 25. ᵇSee Ref. 26. ᶜSee Ref. 27.

FIG. 2. The density-of-states of silicon from the present work (solid curve) and from the previous works (dashed curves) using the parabolic and nonparabolic band structures. Nonparabolicity α = 0.5, the longitudinal effective mass ml = 0.98m0, and the transverse effective mass mt = 0.19m0, where m0 is the mass of a free electron, are employed in the calculation of these analytic band structures. (Abscissa: energy in eV.)
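The parabolic/nonparabolic curves that Fig. 2 compares against the full-band DOS follow from the standard Kane-type dispersion E(1 + αE) = ħ²k²/2m*. A small sketch of that textbook formula, using the parameter values quoted above (an illustration, not the authors' code):

```python
import math

HBAR = 1.054571817e-34   # J s
M0   = 9.1093837015e-31  # kg
EV   = 1.602176634e-19   # J

def dos_nonparabolic(E_eV, alpha=0.5, ml=0.98, mt=0.19, valleys=6):
    """Kane-type DOS per unit volume and energy, in states/(J m^3).

    Uses E(1 + alpha*E) = hbar^2 k^2 / (2 m_d) with the per-valley
    density-of-states mass m_d = (ml*mt^2)^(1/3); alpha is in 1/eV.
    Set alpha=0 to recover the parabolic DOS.
    """
    if E_eV <= 0:
        return 0.0
    md = (ml * mt * mt) ** (1.0 / 3.0) * M0
    E = E_eV * EV
    a = alpha / EV                 # convert 1/eV -> 1/J
    gamma = E * (1 + a * E)        # gamma(E) = E(1 + alpha E)
    return (valleys * math.sqrt(2.0) * md**1.5 / (math.pi**2 * HBAR**3)
            * math.sqrt(gamma) * (1 + 2 * a * E))

for E in (0.1, 0.5, 1.0, 2.0):
    print(E, dos_nonparabolic(E), dos_nonparabolic(E, alpha=0.0))
```

The two columns printed for each energy make the point of Fig. 2 numerically: the parabolic and nonparabolic DOS separate rapidly above a few tenths of an eV.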
A phonon dispersion relation is calculated by an adiabatic bond-charge model (Refs. 16 and 17), shown in Fig. 3, where four types of interactions are considered: (1) Coulomb interaction between ions and bond charges, (2) nearest-neighbor ion-ion central interaction, (3) ion-bond-charge interaction, and (4) bond charge-bond charge noncentral (bond-bending) interaction of the Keating type (Ref. 29). There are four types of phonons: the longitudinal optical phonon (LO), the transverse optical phonon (TO), the longitudinal acoustic phonon (LA), and the transverse acoustic phonon (TA). It appears (1) that there exist six branches, (2) that the TA and TO branches are degenerate along the ⟨100⟩ and ⟨111⟩ directions, and (3) that the TA branch flattens away from the Γ point. In the present MC program, once acoustic or optical phonon scattering is chosen, a uniform random number determines one of the three corresponding phonon branches.

FIG. 3. Phonon dispersion relation calculated by the adiabatic bond-charge model. There exist four types of phonons: longitudinal optical (LO), transverse optical (TO), longitudinal acoustic (LA), and transverse acoustic (TA). The TA and TO branches are degenerate along the ⟨100⟩ and ⟨111⟩ directions. The TA branch typically flattens away from the Γ point.

TABLE II. Energies at symmetry points in the conduction bands (eV). Experimental values are from Refs. 25-27.

State                 Experimental results    Present calculation
Γ15c                  3.4                     3.342
Γ2'c                  4.15                    4.130
X1c                   1.13                    1.194
L1c                   2.0                     2.099
L3c                   3.90                    3.906
Indirect energy gap   1.120                   1.068

FIG. 2. The density of states of silicon from the present work (solid curve) and from previous works (dashed curves) using the parabolic and nonparabolic band structures. Nonparabolicity α = 0.5 eV⁻¹, longitudinal effective mass m_l = 0.98 m₀, and transverse effective mass m_t = 0.19 m₀, where m₀ is the free-electron mass, are employed in the calculation of these analytic band structures.

Electron-phonon scattering rates in traditional MC simulators have been adjusted to fit a wide range of experimental data. This approach, however, is not adequate for simulating hot carrier transport. Here the electron-phonon scattering rate is calculated using a rigid pseudo-ion model (Refs. 19-22). If an electron of wave vector k in the νth conduction band with energy ε_k is scattered by a phonon of wave vector q belonging to one of the six branches (three acoustic and three optical modes), the scattering rate P_{ν'ν}(k, k±q) into band ν' is derived from Fermi's golden rule:

  P_{ν'ν}(k, k±q) = (2π/ħ) |⟨ν', k±q, n_q∓1 | H'_{e-p} | ν, k, n_q⟩|² δ(ε'_{k±q} − ε_k ∓ ħω_q).   (1)

Upper and lower signs correspond to absorption and emission of a phonon, respectively; symbols with a prime denote physical values in the final state. Using the pseudopotential V_κ(r) around the equilibrium positions, the electron-phonon interaction Hamiltonian H'_{e-p} is written, to first order in the atomic displacements, as

  H'_{e-p} = Σ_{l,κ} { V_κ[r − R_{lκ} + u(l,κ)] − V_κ(r − R_{lκ}) } ≃ Σ_{l,κ} u(l,κ) · grad V_κ(r − R_{lκ}),   (2)

where l is the position vector of a unit cell, κ is the index of the atoms constituting a unit cell, and R_{lκ} denotes the equilibrium position of an atom.
If τ_κ is the position vector of the atom measured from an origin within the unit cell, R_{lκ} is written as

  R_{lκ} = l + τ_κ.   (3)

u(l,κ) denotes the displacement of the atom from its equilibrium position, which is represented by the Fourier expansion over q:

  u(l,κ) = Σ_q √( ħ / (2 M_κ N ω_q) ) (a_q + a†_{−q}) e(q,κ) e^{iq·R_{lκ}},   (4)

where M_κ is the atomic mass, N is the atomic density, and a_q and a†_{−q} are the annihilation and creation operators, respectively. e(q,κ) denotes the normalized polarization vector for the atoms of type κ. Using Eq. (2) and the Bloch theorem, we obtain the relation

  ⟨ν', k±q | grad V_κ(r − R_{lκ}) | ν, k⟩ = e^{−iq·l} ⟨ν', k±q | grad V_κ(r − τ_κ) | ν, k⟩.   (5)

By using Eqs. (2), (3), and (5), the matrix element in Eq. (1) is obtained:

  ⟨ν', k±q, n_q∓1 | H'_{e-p} | ν, k, n_q⟩ = Σ_κ √( ħ / (2 M_κ N ω_q) ) A(ν',k±q,ν,κ) · e(±q,κ) √( n_q + 1/2 ∓ 1/2 ),   (6)

where

  A(ν',k±q,ν,κ) = −⟨ν', k±q | grad V_κ(r − τ_κ) | ν, k⟩ e^{iq·τ_κ},   (7)

and n_q is the equilibrium phonon number from Bose-Einstein statistics. Owing to the periodic atomic arrangement in the crystal, the wave function of an electron (ν,k) and the pseudopotential V_κ(r) can be expressed by Fourier series over the reciprocal-lattice vectors G:

  |ν,k⟩ = (1/√V_c) Σ_G C_{ν,k}(G) e^{i(k+G)·r},   (8)

  V_κ(r) = Σ_G V_κ(G) e^{iG·r},   (9)

where V_c is the crystal volume. Then A(ν',k±q,ν,κ) takes the form

  A(ν',k±q,ν,κ) = −i Σ_{G',G} C*_{ν',k±q}(G') C_{ν,k}(G) V_κ(G'−G∓q) (G'−G∓q) e^{−i(G'−G∓q)·τ_κ},   (10)

where C* denotes the complex conjugate of C. The deformation potential D(ν',k±q,ν) is defined by

  ⟨ν', k±q, n_q∓1 | H'_{e-p} | ν, k, n_q⟩ = √( ħ / (2 M N ω_q) ) D(ν',k±q,ν) √( n_q + 1/2 ∓ 1/2 ),   (11)

where M is the sum of the atomic masses over a unit cell (M = Σ_κ M_κ). Comparing Eq. (6) with Eq. (11), the deformation potential takes the form

  D(ν',k±q,ν) = | Σ_κ √(M/M_κ) A(ν',k±q,ν,κ) · e(±q,κ) |.   (12)

In the case of the silicon crystal there is only one kind of atom in the unit cell, so the deformation potential is expressed as

  D(ν',k±q,ν) = √2 | Σ_κ A(ν',k±q,ν,κ) · e(±q,κ) |   (13)

  = √2 | Σ_κ Σ_{G',G} C*_{ν',k±q}(G') C_{ν,k}(G) V_κ(G'−G∓q) [(G'−G∓q) · e(±q,κ)] e^{−i(G'−G∓q)·τ_κ} |.   (14)

The deformation potential obtained from the rigid pseudo-ion model involves only two kinds of input quantities: the normalized polarization vector for the atoms of type κ and the pseudopotential V_κ(G). In the present study, e(q,κ) obtained from the bond-charge model and V_κ(G) obtained from the empirical local pseudopotential method are employed in the calculation of the electron-phonon scattering rate. Thus, the calculated rate is consistent with both the full band structure and the phonon dispersion relation.

An MC simulation using the anisotropic deformation potentials given by Eq. (14) demands a huge amount of memory as well as a long computational time. Drift velocities calculated by using deformation potentials depending on both the initial and final wave vectors of an electron and those averaged over the equi-energy surface showed no significant difference except near the velocity overshoot regime (Ref. 30); even in the velocity overshoot regime, only quite small differences exist (Ref. 30). This means that for practical MC simulation the averaged deformation potentials can be used. For this reason, deformation potentials averaged over the equi-energy surface at the initial state are employed in the present study.
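The only temperature-dependent ingredient of the scattering rate is the occupation factor √(n_q + 1/2 ∓ 1/2) appearing in Eqs. (6) and (11). A minimal sketch of these absorption and emission weights; the 62 meV value below is merely a representative optical-phonon energy for silicon:

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

def phonon_occupation(hw_ev, temperature=300.0):
    """Equilibrium Bose-Einstein phonon number entering Eq. (11):
    n_q = 1 / (exp(hbar*omega / kB*T) - 1)."""
    return 1.0 / np.expm1(hw_ev / (KB * temperature))

def rate_weights(hw_ev, temperature=300.0):
    """Relative absorption/emission weights from n_q + 1/2 -/+ 1/2:
    n_q for absorption, n_q + 1 for emission."""
    n_q = phonon_occupation(hw_ev, temperature)
    return n_q, n_q + 1.0

# for a ~62 meV optical phonon at 300 K, emission strongly dominates
print(rate_weights(0.062, temperature=300.0))
```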
FIG. 4. Total phonon scattering rate calculated by the rigid pseudo-ion model (solid curve) together with calculated results from conventional work (dashed curve). It is worth noticing that the total scattering probability is quite similar to the DOS shown in Fig. 2. The DOS is directly reflected in the electron-phonon scattering rates.

Figure 4 shows the total phonon scattering rate obtained in the present study (solid curve) and that obtained by using the nonparabolic band structure (dashed curve). The total low-energy scattering rate obtained in the present study is very close to the rate used in the previous MC work. Comparison with Fig. 2 shows that the total scattering rate strongly reflects the DOS.

The impact-ionization rate is calculated by the use of Fermi's golden rule from the full band structure. In the impact-ionization process, an electron in the conduction band interacts with an electron in the valence band via the screened Coulomb potential V(r − r'):

  V(r − r') = e² / [4π ε(q,ω) |r − r'|].   (15)

If state 1 (a Bloch state with band index ν₁ and wave vector k₁) and state 2 are the states of the initial electrons in the conduction and valence bands before the transition, respectively, and states 1' and 2' are those of the final electrons in the conduction band after the transition, the ionization rate r(1,2→1',2') is expressed by

  r(1,2→1',2') = (2π/ħ) [ |M_a|² + |M_b|² + |M_a − M_b|² ] δ(ε₁ + ε₂ − ε₁' − ε₂'),   (16)

where δ(ε) is the energy-conserving delta function and ε_i (i = 1, 2, 1', 2') denotes the energy of each electron. The direct matrix element M_a and the exchange matrix element M_b are given by

  M_a = ⟨ ψ₁'(r₁) ψ₂'(r₂) | e² / [4π ε(q,ω) |r₁ − r₂|] | ψ₁(r₁) ψ₂(r₂) ⟩,   (17)

  M_b = ⟨ ψ₂'(r₁) ψ₁'(r₂) | e² / [4π ε(q,ω) |r₁ − r₂|] | ψ₁(r₁) ψ₂(r₂) ⟩,   (18)

where ε(q,ω) is a dielectric function depending on both wave vector and transition energy. The dielectric function is calculated by employing the first four valence bands and the first 11 conduction bands (Ref. 23). The wave vector q and the energy ħω are interpreted, in the context of electron-electron scattering, as the momentum and energy exchanged during the transition. |M_a|² + |M_b|² in Eq. (16) is the matrix element for the case in which the spins of electrons 1 and 2 are different, while |M_a − M_b|² is the matrix element for the case in which the spins are the same.

The wave function of an electron, ψ_{ν,k}, is expressed by the Fourier series over reciprocal-lattice vectors G:

  ψ_{ν,k}(r) = (1/√V_c) Σ_G A_{ν,k}(G) e^{i(k_ν+G)·r},   (19)

where k_ν is the wave vector associated with energy band ν. The wave functions are obtained from the empirical local pseudopotential method. If we represent 1/|r₁ − r₂| by the Fourier expansion

  1/|r₁ − r₂| = (4π/V_c) Σ_q (1/q²) e^{iq·(r₁−r₂)},   (20)

then, by substituting Eqs. (19) and (20) into Eqs. (17) and (18),

  M_a = (e²/V_c) Σ_{G₁',G₂',G₁,G₂} [ 1 / ( ε(q,ω) |k₁' + G₁' − k₁ − G₁|² ) ] A*_{k₁',ν₁'}(G₁') A*_{k₂',ν₂'}(G₂') A_{k₁,ν₁}(G₁) A_{k₂,ν₂}(G₂) × δ(−k₁' − G₁' + k₁ + G₁ − k₂' − G₂' + k₂ + G₂),   (21)

  M_b = (e²/V_c) Σ_{G₁',G₂',G₁,G₂} [ 1 / ( ε(q,ω) |k₂' + G₂' − k₁ − G₁|² ) ] A*_{k₂',ν₂'}(G₂') A*_{k₁',ν₁'}(G₁') A_{k₁,ν₁}(G₁) A_{k₂,ν₂}(G₂) × δ(−k₁' − G₁' + k₁ + G₁ − k₂' − G₂' + k₂ + G₂),   (22)

where δ is the wave-vector-conserving delta function. The probability of transition from the initial state 1 is obtained from Eq. (16) by summing over k₂, k₁', and k₂':

  r_II(1) = Σ_{k₂,k₁',k₂'} r(1,2→1',2').   (23)

FIG. 5. (a) Impact-ionization rate obtained in the present study (symbols) together with calculated results from conventional work (curves). Dashed curves are from previous Monte Carlo works. The solid curve is from reported experimental data obtained by using soft x rays (Ref. 35).
The present impact-ionization rate for low-energy electrons scatters over a rather wide range owing to its strongly anisotropic nature; for example, for electrons at 2 eV the rate differs by as much as four orders of magnitude. The anisotropy diminishes with increasing electron energy. (b) Average energy of the secondary generated carriers as a function of the energy of the primary electron at the moment of their generation. The energies of the generated electrons and holes are found to depend linearly on the primary electron energy. Note that the calculated data for holes scatter over a wider range than those for electrons. This is due to the fact that both the heavy- and light-hole bands are involved in the impact-ionization process.

The rate is calculated numerically in such a way as to conserve both momentum and energy of the carriers in each transition. The numerical integral is performed by using the tetrahedron method (Refs. 31-34). Figure 5(a) shows the impact-ionization rate obtained in the present study (symbols) together with those used in previous work (curves) (Refs. 10, 11, and 14). Although the reported impact-ionization rates differ by three orders of magnitude, all authors reported that the simulated impact-ionization coefficient agrees very well with reported experimental data. This may be possible by adjusting the electron-phonon scattering rate (Ref. 35) so as to control the high-energy tail of the electron energy distribution. The present results scatter over a rather wide range, which implies strong anisotropy; for example, the ionization rate differs by four orders of magnitude for electrons with an energy of 2 eV. The anisotropy diminishes with increasing electron energy. Very recently, the electron dynamics for electron energies up to 5 eV has been studied by soft x-ray photoemission spectroscopy (Ref. 35). The impact-ionization rate obtained in the present study agrees well with these experimental results, which confirms the validity of the physical model we used. The MC simulation program used in the present study takes this anisotropic rate into account to calculate impact-ionization coefficients as well as the quantum yield.

Conventional impact-ionization rates increase very rapidly with electron energy above the threshold energy. This is due to the assumption, usually made in theories of impact ionization, that carriers ionize as soon as they gain the threshold energy; this is termed a hard-threshold impact-ionization model. In contrast, the ionization probability obtained in the present study rises rather slowly. This is termed a soft threshold, which has been suggested by previous researchers (Refs. 36-40) in the case of silicon.

Figure 5(b) shows the energies of the secondary electrons and holes generated by the impact-ionization process as a function of the initial electron energy at the moment of the transition. It appears that the secondary electron energy is proportional to the initial electron energy and is almost independent of the wave vector and band index of the initial electron before the transition. In the MC simulator, the electron energy after the impact-ionization process is determined by using this relationship.
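A minimal sketch of how a linear relation of this kind can be used to assign final-state energies after an ionization event is given below. The slope, gap value, and Gaussian spread are hypothetical placeholders, not the fitted relation of Fig. 5(b):

```python
import numpy as np

rng = np.random.default_rng(0)

def secondary_energies(e_primary, slope=0.25, e_gap=1.12, spread=0.1):
    """Sketch of a final-state assignment after impact ionization following
    the linear trend of Fig. 5(b): the mean secondary-carrier energy grows
    linearly with the primary energy.  slope, spread, and the equal
    electron/hole treatment are hypothetical stand-ins."""
    e_mean = max(slope * (e_primary - e_gap), 0.0)
    e2 = max(rng.normal(e_mean, spread * e_mean + 1e-12), 0.0)  # secondary electron
    h = max(rng.normal(e_mean, spread * e_mean + 1e-12), 0.0)   # secondary hole
    e1 = max(e_primary - e_gap - e2 - h, 0.0)  # primary keeps the remainder
    return e1, e2, h

print(secondary_energies(3.0))  # energies in eV for a 3 eV primary electron
```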
B. Outline of the Monte Carlo simulator

The MC simulator developed in the present study includes the energy bands, the DOS, and six phonon dispersion relations (three for optical phonons, the others for acoustic phonons). In the simulator, the first Brillouin zone is discretized with a mesh spacing of 0.05 in units of 2π/a, where a is the lattice constant (a = 5.431 Å for silicon). Thus, the entire Brillouin zone is discretized by 33 861 grid points. At each grid point, the energy E_ν(k), the gradients ∂E_ν(k)/∂k_i, corresponding to the group velocity, and the second derivatives ∂²E_ν(k)/∂k_i∂k_j, corresponding to the reciprocal effective-mass tensors, are computed and stored in a look-up table, where k is the wave vector of the electron, i, j = x, y, z, and the band index ν runs over the first five conduction bands. For each cubic cell consisting of eight grid points, the DOS data are stored in a look-up table with an energy spacing of 5 meV.

The MC program used in the present study is outlined in Fig. 6. At invocation, the simulator requests the applied electric field and the temperature. The wave vector of an initial electron with the energy 3k_BT/2 is chosen stochastically in such a way as to be consistent with the full band structure, where k_B is the Boltzmann constant and T is the absolute temperature (the method is discussed later). The electron is allowed to undergo several scattering events to ensure independence from its initial conditions. The electron scatters by randomly choosing one of the scattering processes: acoustic phonon absorption, acoustic phonon emission, optical phonon absorption, optical phonon emission, impact ionization, or self-scattering. If phonon scattering or impact ionization is chosen, the simulator picks up all the candidate final states in such a way as to conserve both the energy and momentum of the electron, taking into account both the full band structure and the phonon dispersion relation.

FIG. 6. Flow chart of the MC simulator.

The algorithm to find the final state is rather complicated. As an example, the method to find a final state after electron-phonon scattering is demonstrated in Fig. 7. As described above, each cubic cell generated in the entire first Brillouin zone has a look-up table containing the DOS data with an energy spacing of 5 meV, illustrated as a rectangle in Fig. 7(a). First, the simulator searches for the cubic cells whose energy ranges lie within the electron energy plus or minus the maximum phonon energy, i.e., 65 meV [Fig. 7(a)]. This so-called coarse search drastically reduces the number of candidate final states; furthermore, it enhances the computational efficiency by taking advantage of the parallel-processing capability of a supercomputer. Then the MC program searches over the chosen cubic cells and finds the energy involved in the phonon scattering by using the phonon dispersion relation. The difference in wave vector between the initial electron and the selected cubic cell provides the wave vector of the phonon involved in the transition [Figs. 7(b), 7(c)]. This so-called fine search picks up all the candidate final states conserving both the energy and momentum of the electron involved in the transition. In each of the selected cubic cells, the DOS on the desired energy band is calculated, and all the DOS are summed up. Then, by generating a random number, one of the cubic cells in the first Brillouin zone is chosen stochastically according to the DOS in each cubic cell [Fig. 7(d)]. The chosen cubic cell is divided into six tetrahedra [Fig. 7(e)].
A third search is then performed to find the tetrahedra that intersect the equi-energy surface after scattering, and the DOS of each tetrahedron is calculated. After all the DOS are summed up, one tetrahedron out of the six is selected, again stochastically, by generating a random number [Fig. 7(f)]. A particular wave vector on the equi-energy surface, which describes the final state, is determined by using two additional random numbers [Fig. 7(g)]. In the case of impact ionization, the electron energy in the final state is determined using the relation shown in Fig. 5(b), and the wave vector of the final electron is then determined stochastically according to the procedure shown in Figs. 7(c)-7(g).

FIG. 7. Schematic illustration of the procedure to determine the final state after a transition such as phonon scattering or impact ionization. Each cubic cell generated in the entire first Brillouin zone has a look-up table containing the DOS data with an energy spacing of 5 meV, illustrated as a rectangle in (a). First, the simulator searches for the cubic cells whose energy ranges lie within the electron energy plus or minus the maximum phonon energy, i.e., 65 meV (a). This so-called coarse search drastically reduces the number of candidate final states and enhances the computational efficiency by taking advantage of the parallel-processing capability of a supercomputer. Then the MC program searches over the chosen cubic cells and finds the energy involved in the phonon scattering by the use of the phonon dispersion relation. The difference in wave vector between the initial electron and the selected cubic cell provides the wave vector of the phonon involved in the transition (b), (c). This so-called fine search picks up all the candidate final states conserving both the energy and momentum of the electron involved in the transition. In each of the selected cubic cells, the DOS on the desired energy band is calculated, and all the DOS are summed up. Then, by generating a random number, one of the cubic cells is chosen stochastically according to the DOS in each cubic cell (d). The chosen cubic cell is divided into six tetrahedra (e). A third search then finds the tetrahedra intersecting the equi-energy surface after scattering, and one tetrahedron out of the six is selected stochastically by generating a random number (f). A particular wave vector on the equi-energy surface, which describes the final state, is determined by using two additional random numbers (g).

A free-flight time τ after the scattering is given by

  τ = −(1/Γ) ln ξ,   (24)

where Γ is the maximum total scattering rate in a given energy range and ξ is a uniform random number between 0 and 1. After each free flight, the wave vector of the electron is updated by

  k_new = k_old − eFτ/ħ,   (25)

where F is the applied electric field. Statistics are then collected for an appropriate number of scattering events to guarantee convergence of the electron distribution. The CPU time required for a 10 000-electron distribution was several thousand seconds on a CRAY Y-MP supercomputer.
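The free-flight and drift kernel defined by Eqs. (24) and (25) is compact enough to sketch directly. In the following Python fragment the ceiling rate Γ and the field value are illustrative numbers, and the mechanism selection and final-state search of Figs. 6 and 7 are stubbed out:

```python
import numpy as np

Q    = 1.602177e-19   # elementary charge, C
HBAR = 1.0545718e-34  # J s

rng = np.random.default_rng(1)

def free_flight_time(gamma_max):
    """Eq. (24): tau = -(1/Gamma) * ln(xi), Gamma the maximum total
    scattering rate in the energy range, xi uniform on (0, 1]."""
    return -np.log(rng.uniform(1e-12, 1.0)) / gamma_max

def drift(k, field, tau):
    """Eq. (25): k_new = k_old - e*F*tau/hbar, field in V/m."""
    return k - Q * np.asarray(field) * tau / HBAR

k = np.zeros(3)                    # electron wave vector, 1/m
field = np.array([1e7, 0.0, 0.0])  # 100 kV/cm along <100>
gamma_max = 1e14                   # 1/s, illustrative ceiling rate
for _ in range(5):
    tau = free_flight_time(gamma_max)
    k = drift(k, field, tau)
    # ...choose a mechanism (or self-scattering) and a final state here...
print(k)
```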
III. RESULTS AND DISCUSSION

In this section, the calculated results under the steady-state condition as well as under nonequilibrium conditions are shown. All simulated results concern electron transport in homogeneous intrinsic bulk silicon. Under this condition, electron-impurity scattering and electron-electron scattering are not significant; the MC simulator in the present work therefore does not take these scattering mechanisms into account.

A. Electron transport in homogeneous intrinsic silicon under the steady-state condition

Figure 8 shows the calculated steady-state drift velocity of electrons along the ⟨100⟩ and ⟨111⟩ crystallographic directions as a function of the applied electric field, together with the experimental results of Canali et al. (Ref. 41). Below 10⁴ V/cm, the drift velocity increases linearly with the electric field. If the field is sufficiently high, on the other hand, the drift velocity saturates: since high-energy electrons have a larger scattering rate than cold electrons, they undergo frequent scattering. Therefore, the drift velocity deviates from Ohm's law and ultimately, for electric fields above 10⁵ V/cm, the saturation velocity is reached. As can be seen from the figure, both the calculated electron mobility and the saturation velocity agree well with the experimental data. It should be noted that the MC simulator in the present work reproduces the experimental results without adjustable parameters such as deformation potential coefficients or phonon coupling constants.

Figure 9 shows the calculated average electron energy as a function of the applied electric field in comparison with calculated results from the previous MC work (Ref. 11). The values obtained in the present study are slightly smaller than those reported in the previous work, especially in the high-field regime. This is due to the difference in the detailed features of the adopted band models in the high electron-energy range. In the region of field strengths above 10⁴ V/cm, optical-phonon-assisted scattering keeps its predominant role but is no longer able to fully dissipate the energy gained by the carriers from the field. As a consequence, the carrier mean energy is found to increase with electric field. At extremely high fields, the average energy increases at a slower rate. This is explained by the fact that high-energy electrons are susceptible to more frequent impact-ionization scattering and lose their energy quite rapidly.
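Both observables discussed above are simple ensemble averages over the simulated electrons. A minimal sketch, with toy snapshot values standing in for an actual MC ensemble:

```python
import numpy as np

def ensemble_averages(velocities, energies):
    """Steady-state observables of Figs. 8 and 9 as ensemble averages:
    drift velocity <v> and mean energy <E>.
    velocities: (N, 3) group velocities from the band-structure table;
    energies:  (N,) electron energies in eV."""
    return np.mean(velocities, axis=0), np.mean(energies)

# toy snapshot of N = 4 electrons (illustrative numbers only)
v = np.array([[1.0e5, 0.0, 0.0],
              [0.8e5, 0.1e5, 0.0],
              [1.2e5, 0.0, -0.1e5],
              [0.9e5, 0.0, 0.0]])   # m/s
e = np.array([0.3, 0.5, 0.8, 0.4])  # eV
print(ensemble_averages(v, e))
```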
FIG. 8. Calculated drift velocity as a function of electric field in comparison with experimental data, for silicon at 300 K along the ⟨100⟩ and ⟨111⟩ directions (simulation and measurements of Canali et al.).

FIG. 9. Calculated average electron energy as a function of electric field in comparison with calculated results from conventional Monte Carlo work.

A knowledge of the diffusion coefficient is useful for a better understanding of carrier transport phenomena and is necessary for the study of high-frequency device performance. In order to calculate the electron diffusion coefficients, the second central moment of the electron position in real space is investigated at various applied electric fields. If the direction of the applied electric field is set along the x axis of the Cartesian coordinate system, the second central moments σ_ij (i, j = x, y, z) are defined as follows:

  ⟨r_i⟩ = (1/N) Σ_{l=1}^{N} r_i^(l),   (26)

  ⟨r_i r_j⟩ = (1/N) Σ_{l=1}^{N} r_i^(l) r_j^(l),   (27)

  σ_ij = ⟨r_i r_j⟩ − ⟨r_i⟩⟨r_j⟩   (i, j = x, y, z),   (28)

where r^(l) is the position of the lth electron in real space, N is the total number of electrons, and ⟨ ⟩ denotes the mean value. Figure 10 shows the calculated second central moments as a function of time. At 1 kV/cm, σ_xx is very close to the transverse moments σ_yy and σ_zz, as shown in Fig. 10(a), while it deviates largely from the transverse moments at 100 kV/cm, as shown in Fig. 10(b). Other components such as σ_xy, σ_yz, and σ_zx are found to be two orders of magnitude smaller than the longitudinal and transverse ones; furthermore, these off-diagonal moments are independent of the elapsed time (not shown in the figure).

FIG. 10. Calculated second central moments in real space as a function of time after (a) 1 kV/cm and (b) 100 kV/cm electric fields are applied.

The diffusion coefficients D_ij for electrons are defined as

  D_ij = (1/2) dσ_ij/dt   (i, j = x, y, z).   (29)

Figure 11 shows the longitudinal and transverse diffusion coefficients as a function of the applied electric field in comparison with the experimental results of Jacoboni et al. (Ref. 5). In the figure, D∥ denotes the longitudinal diffusion coefficient, D∥ = D_xx, and D⊥ denotes the transverse diffusion coefficients, D⊥ = D_yy or D_zz. At low electric fields, the diffusion coefficient is constant and isotropic. At high electric fields above 10 kV/cm, the diffusion coefficient shows strong anisotropy: the diffusion process parallel to the electric field differs from that perpendicular to it. The longitudinal diffusion coefficient decreases to about one third of its Ohmic value at around 20 kV/cm. In contrast, the transverse diffusion coefficient also decreases, but to a lesser extent than the longitudinal one.

FIG. 11. Calculated diffusion coefficients as a function of applied electric field in comparison with experimental data along the ⟨111⟩ direction. At low fields (below 20 kV/cm), the diffusion coefficients are constant and isotropic. At high fields, the diffusion coefficients show strong anisotropy: the diffusion process parallel to the electric field differs from that perpendicular to it. The longitudinal diffusion coefficient D∥ decreases to about one third of its Ohmic value at around 20 kV/cm; the transverse diffusion coefficient D⊥ also decreases, but to a lesser extent.
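Equations (26)-(29) translate directly into a small estimator. The sketch below builds σ_ij from particle positions and extracts D_ij from the slope of a linear fit; the toy snapshots assume isotropic spreading with D = 30 cm²/s, roughly the order of the Ohmic value for silicon:

```python
import numpy as np

def second_central_moment(r):
    """Eq. (28): sigma_ij = <r_i r_j> - <r_i><r_j> over the ensemble.
    r: (N, 3) electron positions in real space (m)."""
    mean = r.mean(axis=0)
    return (r[:, :, None] * r[:, None, :]).mean(axis=0) - np.outer(mean, mean)

def diffusion_coefficients(snapshots, times):
    """Eq. (29): D_ij = (1/2) d(sigma_ij)/dt, estimated as half the slope of
    a linear fit of sigma_ij(t); with the field along x, D_par = D_xx and
    D_perp = D_yy or D_zz."""
    sig = np.array([second_central_moment(r) for r in snapshots])
    d = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            d[i, j] = 0.5 * np.polyfit(times, sig[:, i, j], 1)[0]
    return d

# toy data: Gaussian spreading with sigma_ii = 2*D*t, D = 30e-4 m^2/s
rng = np.random.default_rng(2)
times = np.linspace(0.0, 1e-12, 20)
snaps = [rng.normal(0.0, np.sqrt(2 * 30e-4 * (t + 1e-15)), size=(2000, 3))
         for t in times]
print(np.diag(diffusion_coefficients(snaps, times)))  # ~30e-4 m^2/s = 30 cm^2/s
```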
Figure 12 shows the calculated impact-ionization coefficients (solid line) as a function of the inverse of the applied electric field, together with experimental results from previous work (Refs. 42 and 43). The quantitative agreement between the simulated and experimental results confirms the validity of the newly developed Monte Carlo simulator and of the physical models that were used.

FIG. 12. Calculated impact-ionization coefficient as a function of reciprocal electric field, together with the experimental data of Overstraeten et al. and Grant et al.

Figure 13 shows the calculated quantum yield (symbols) in comparison with experimental results (solid curve) (Ref. 44). The calculated results agree well with the experimental data, which also supports the validity of the impact-ionization model employed in the MC simulator.

FIG. 13. Calculated quantum yield as a function of electron energy, together with the experimental data of Chang et al.

One of the key issues in hot-carrier-related degradation is to obtain the electron energy distribution. Figure 14 shows (a) the calculated electron energy distribution at various applied electric fields and (b) the phonon scattering rate as a function of electron energy, together with the calculated electron energy distributions at 300 and 500 kV/cm.

FIG. 14. (a) Calculated energy distribution at various applied electric fields at 300 K and (b) total phonon scattering rate as a function of electron energy (solid curve) together with the electron energy distributions at 300 and 500 kV/cm (dashed curves). The distributions at 300 and 500 kV/cm have a shoulder in the energy range from 2.6 to 3.2 eV. This originates from a feature of the phonon scattering rate: in this energy range, the curve of the scattering rate has a large dip, which reflects the DOS. Electrons outside this energy range are scattered more frequently than those inside it. This is why the energy distribution has a shoulder in this energy range.

At low electric fields, the calculated electron distribution is the Maxwell-Boltzmann distribution; at high electric fields, the distribution deviates from it. It is noted that the distributions at 300 and 500 kV/cm have a shoulder in the energy range from 2.6 to 3.2 eV. This originates from a feature of the phonon scattering rate: in this energy range, the curve of the scattering rate has a large dip, which reflects the DOS. Electrons outside this energy range are scattered more frequently than those inside it, which is why the energy distribution has a shoulder in this range. It is concluded that the phonon scattering rate, reflecting the DOS, has a large effect on the electron energy distribution.
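The low-field Maxwell-Boltzmann check described above is easy to reproduce from an ensemble of electron energies. A minimal sketch, assuming a parabolic-band reference distribution f(E) ∝ √E exp(−E/k_BT):

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

def maxwell_boltzmann(e, temperature):
    """Equilibrium reference distribution f(E) ~ sqrt(E) exp(-E/kB T),
    a parabolic-band DOS times the Boltzmann factor, normalized on the grid."""
    f = np.sqrt(e) * np.exp(-e / (KB * temperature))
    return f / np.trapz(f, e)

def energy_histogram(energies, bins=60, e_max=0.5):
    """Normalized electron energy distribution from an MC ensemble,
    comparable with the curves of Fig. 14(a)."""
    hist, edges = np.histogram(energies, bins=bins, range=(0.0, e_max), density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist

# toy ensemble drawn from the Maxwellian itself (gamma with shape 3/2)
rng = np.random.default_rng(3)
e_sim = rng.gamma(shape=1.5, scale=KB * 300.0, size=20000)
centers, hist = energy_histogram(e_sim)
print(hist[:3])                                # low-field histogram...
print(maxwell_boltzmann(centers, 300.0)[:3])   # ...tracks the 300 K curve
```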
At high electric fields, the impact-ionization process has a large effect on the electron energy distribution. In order to investigate the role of impact ionization in the electron energy distribution, the distribution is calculated with and without the impact-ionization process. Figure 15 shows the electron energy distribution calculated with (solid curve) and without (dashed curve) impact-ionization processes at 200 and 500 kV/cm. Without impact-ionization processes, the population of hot carriers is overestimated, especially at 500 kV/cm. In other words, the impact-ionization processes drive electrons into the low-energy range. This cooling effect might then affect macroscopic transport.

FIG. 15. Electron energy distribution calculated with and without impact-ionization processes at 200 and 500 kV/cm. Without impact-ionization processes, the population of hot carriers is overestimated, especially at 500 kV/cm. In other words, the impact-ionization processes drive electrons into the low-energy range. This cooling effect might affect the macroscopic transport.

Figure 16 shows the electron distributions in momentum space at (a) 10 kV/cm, (b) 100 kV/cm, and (c) 1 MV/cm. At 10 kV/cm, most of the electrons reside in the sixfold equivalent X valleys. At 100 kV/cm, they are still confined in the X valleys, but widely distributed. It is noted that the centroid of the electrons in each valley shifts slightly against the electric field, which gives rise to a finite drift velocity. At extremely high fields like 1 MV/cm, the electrons are distributed over the entire Brillouin zone, indicating a high average electron energy.

FIG. 16. Electron distribution in momentum space at (a) 10 kV/cm, (b) 100 kV/cm, and (c) 1 MV/cm electric field along the ⟨100⟩ direction at 300 K. At 10 kV/cm, most of the electrons reside in the sixfold equivalent X valleys. At 100 kV/cm, they are still confined in the X valleys, but widely distributed. Note that the centroid of the electrons in each valley shifts slightly against the electric field, which gives rise to a finite drift velocity. At extremely high fields like 1 MV/cm, the electrons are distributed over the entire Brillouin zone, indicating a high average energy.

B. Electron transport in homogeneous intrinsic silicon during velocity overshoot

In this section, special attention is paid to transient electron transport, such as the time dependence of the drift velocity, the group velocity distribution, and the anisotropy of the impact-ionization process after a high electric field is applied at time 0 like a step function. This condition may be close to the large gradient of the electric field around the drain edge of submicrometer MOSFETs or the base-collector depletion region of ultrathin-base bipolar transistors.

Figure 17 shows the electron distribution in momentum space at (a) 0.05 and (b) 0.15 ps after a 100 kV/cm electric field is applied along the ⟨100⟩ direction. The distribution is projected onto the plane normal to the ⟨001⟩ direction. Under the thermal-equilibrium condition, most of the electrons reside in the six equivalent X valleys; thus, the distribution of the electrons in momentum space is close to an elliptic distribution. When the electric field is applied along the ⟨100⟩ direction, electrons with the light transverse effective mass are expected to be accelerated quite rapidly, while electrons with the heavy longitudinal effective mass are expected to be accelerated slowly. At 0.05 ps, the light-mass electrons, keeping their elliptic distribution, shift against the applied electric field direction, indicating ballistic transport. At 0.15 ps, part of the electron distribution around the ⟨100⟩ axis still remains elliptic, while the rest does not; this is due to the difference in the effective masses of the electrons. The distribution around the ⟨001⟩ axis reflects a star-shaped equi-energy surface due to the fourfold rotational symmetry around the ⟨100⟩ axes.
FIG. 17. Electron distribution in momentum space at (a) 0.05 and (b) 0.15 ps after a 100 kV/cm electric field is applied along the ⟨100⟩ direction at 300 K. The distribution is projected onto the plane normal to the ⟨001⟩ direction.

If the electric field is applied along the ⟨100⟩, ⟨011⟩, or ⟨111⟩ direction, there exist two kinds of electrons with different effective masses. Figure 18 shows schematic illustrations of the shapes of the constant-energy surfaces in silicon. There are six ellipsoids along the ⟨100⟩ axes, with the centers of the ellipsoids located at about 0.85 (2π/a) from the Brillouin zone center. The shaded regions in the figure correspond to the valleys containing electrons with the heavier effective mass. From now on, we call these electrons heavy electrons and the others light electrons, for simplicity. When the electric field is applied along the ⟨100⟩ direction, heavy electrons are defined in the MC program by

  |k_x| > |k_y|   (30)

and

  |k_x| > |k_z|,   (31)

where k_x, k_y, and k_z are the components of the electron wave vector.

FIG. 18. Schematic illustration of the equi-energy surfaces around the first conduction-band minima. The shaded regions correspond to the valleys containing electrons with the heavier effective mass when the electric field is applied along the (a) ⟨100⟩, (b) ⟨011⟩, and (c) ⟨111⟩ crystallographic directions. From the viewpoint of the ellipsoidal approximation, the effective masses are equivalent along the ⟨111⟩ direction. This is, however, incorrect for the full band structure.

From the viewpoint of the ellipsoidal approximation, the effective masses are equivalent along the ⟨111⟩ direction. This is, however, incorrect for the full band structure: in fact, the curvature of the energy band from the band minima toward the X points differs from that toward the Γ points, a difference that is ignored in the parabolic and nonparabolic band models.
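Equations (30) and (31) amount to a one-line classification of the valley occupied by each electron. A minimal sketch for the ⟨100⟩ case of Fig. 18(a); the other field directions of Fig. 18 would need their own conditions:

```python
import numpy as np

def is_heavy(k):
    """Eqs. (30)-(31) for a field along <100> (the x axis): an electron is
    'heavy' (longitudinal mass along the field) when the field-axis
    component of its wave vector dominates."""
    kx, ky, kz = np.abs(k)
    return kx > ky and kx > kz

# electrons near the +x valley are heavy; near +y they are light
print(is_heavy(np.array([0.85, 0.05, 0.02])))  # True  (heavy)
print(is_heavy(np.array([0.03, 0.85, 0.02])))  # False (light)
```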
Figure 19 shows the drift velocity as a function of time along the (a) ⟨100⟩, (b) ⟨011⟩, and (c) ⟨111⟩ directions after a 500 kV/cm electric field is applied at t = 0. The initial distribution of electrons is assumed to be the thermal-equilibrium Maxwellian distribution in momentum space. In the figure, the solid curve corresponds to the drift velocity of all electrons, while the symbols correspond to the drift velocities of the light electrons (open circles) and heavy electrons (solid circles), respectively. A key feature of high-field transient transport is velocity overshoot. The very first part of the transient velocity curve indicates ballistic transport, caused by the acceleration of the electrons by the electric field; in this regime, the drift velocity increases linearly with the time elapsed after applying the field. After the ballistic regime, a large velocity overshoot is observed; after the overshoot regime, the drift velocity decreases and reaches its steady-state value. It appears that the velocity overshoot shows strong anisotropy: for example, the transient drift velocity along the ⟨100⟩ direction is much larger than that along the ⟨011⟩ and ⟨111⟩ directions. It is clear that light electrons mainly contribute to the velocity overshoot along the ⟨100⟩ and ⟨111⟩ directions, while heavy electrons mainly contribute to the velocity overshoot along the ⟨011⟩ direction.

FIG. 19. Calculated drift velocity as a function of time after a 500 kV/cm electric field is applied along the (a) ⟨100⟩, (b) ⟨011⟩, and (c) ⟨111⟩ directions. The curve in each panel corresponds to the drift velocity of all electrons. Open circles and solid circles correspond to the drift velocities of electrons in the unshaded and shaded regions of Fig. 18, respectively. Along the ⟨100⟩ and ⟨111⟩ directions, electrons in the unshaded region (light electrons) mainly contribute to the velocity overshoot. Along the ⟨011⟩ direction, however, electrons in the shaded region (heavy electrons) mainly contribute to the velocity overshoot. The reason is that the ellipsoidal approximation fails along this direction at high electric field.

The reason is that the ellipsoidal approximation fails along this direction at high electric fields: the average effective mass of the electrons in the shaded region of Fig. 18(b) is lighter than that of the electrons in the unshaded region at high electric fields. Along the ⟨100⟩ direction, the curve of the drift velocity of the heavy electrons has two peaks during the velocity overshoot [Fig. 19(a)]. The drift velocity of the light electrons along the ⟨111⟩ direction shows the same characteristic [Fig. 19(c)]. These characteristics are attributed to the energy band structure of silicon.
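The ballistic portion of the transients in Fig. 19 can be estimated from v(t) = eFt/m*, which also makes the mass-induced anisotropy explicit. A minimal sketch using the analytic effective masses quoted in Sec. II A; the 20 fs instant is an illustrative point inside the ballistic regime:

```python
Q  = 1.602177e-19  # elementary charge, C
M0 = 9.10938e-31   # free-electron mass, kg

def ballistic_velocity(t, field=5e7, m_rel=0.19):
    """Before the first scattering event, v(t) = e*F*t/m*.
    field = 5e7 V/m is 500 kV/cm; m_rel = 0.19 (transverse) for the
    light electrons, 0.98 (longitudinal) for the heavy ones."""
    return Q * field * t / (m_rel * M0)

t = 20e-15  # 20 fs
print(ballistic_velocity(t, m_rel=0.19))  # light electrons: fast rise
print(ballistic_velocity(t, m_rel=0.98))  # heavy electrons: ~5x slower
```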
Velocity overshoot has been observed in the form of a high transconductance, i.e., 590 mS/mm at room temperature, measured in silicon n-channel MOSFETs (Ref. 45). The measured transconductance of the devices increased as the channel length was scaled down to 0.07 μm (Ref. 45), and the increase was more pronounced at liquid-nitrogen temperature than at room temperature. The distance over which velocity overshoot is observed is about 0.02 μm from Fig. 19. If the device dimensions come into this range, the anisotropy of velocity overshoot might be observed in the form of anisotropic transconductances measured in devices having channels along different crystallographic directions.

Velocity overshoot has also been observed in the form of the cutoff frequency, or of the signal delay through the collector depletion region, in bipolar transistors (Ref. 46). The signal delay through the collector depletion region in ultrathin-base (0.025 μm width) silicon bipolar transistors was simulated by a Monte Carlo method (Ref. 47). That simulation makes it clear that velocity overshoot, which occurs at the base-collector junction of the devices, reduces the signal delay through the collector depletion region. Again, the distance over which velocity overshoot is observed is about 0.02 μm from Fig. 19. If the base width comes into this range, the anisotropy of velocity overshoot might be observed in the form of an anisotropic collector signal delay measured in devices with base-collector junctions along different crystallographic directions.

In order to investigate the velocity overshoot in detail, the distribution of the group velocity is calculated. Figure 20 shows the distributions of the group velocity at (a) 0.025, (b) 0.1, and (c) 0.3 ps after a 500 kV/cm electric field is applied along the ⟨100⟩ direction. In this figure, the solid curve indicates the distribution of all electrons; open circles and solid circles indicate the distributions of the light and heavy electrons, respectively. The negative region of the abscissa corresponds to the direction of the applied electric field; in other words, the positive region corresponds to the direction along which the electrons progress. At 0.025 ps, the right-side peak of the group velocity distribution is due to light electrons, indicating ballistic transport. These light electrons, so-called lucky electrons, mainly contribute to the velocity overshoot. The solid circles representing heavy electrons move against the ⟨100⟩ direction, but the magnitude of the shift is very small. Figure 20(a) demonstrates that a drifted Maxwellian distribution, a basic assumption of the hydrodynamic model, is not appropriate for describing the velocity overshoot regime. At 0.1 ps, the right-side peak due to light electrons disappears because of phonon scattering, while a right-side peak due to heavy electrons appears, again indicating ballistic transport; the difference in the time at which the right-side peak appears originates from the difference in the effective masses of the electrons. At 0.3 ps, the distribution is close to the Maxwell-Boltzmann distribution, indicating the steady-state condition.

FIG. 20. Calculated group velocity distribution at (a) 0.025, (b) 0.1, and (c) 0.3 ps after a 500 kV/cm electric field is applied along the ⟨100⟩ direction. The curve corresponds to the group velocity distribution of all electrons. Open circles and solid circles correspond to the group velocity distributions of light and heavy electrons, respectively. Light electrons mainly contribute to the velocity overshoot.
If an electric field is applied along different crystallographic directions, different impact-ionization characteristics are observed. This phenomenon was first reported by Takagi et al. (Ref. 3) with the use of short-channel MOSFETs having channels along the ⟨100⟩ and ⟨011⟩ directions. They reported that the observed substrate current is a maximum when the drain current flows along the ⟨100⟩ direction, and that the ratio of the substrate current induced by the ⟨100⟩ drain current to that induced by the ⟨011⟩ drain current was larger at 81 K than at 300 K. The transient impact-ionization characteristics obtained in the present study support their results. The substrate current originates from the secondary holes generated by the impact-ionization process around the drain region. The primary electrons undergo transient transport around the drain region because of the large gradient of the applied electric field. Their observation thus indicates that primary electrons undergoing transient transport cause the anisotropic impact ionization.

Figure 21 shows the number of impact-ionization events as a function of time after a 500 kV/cm electric field is applied at t = 0 along the ⟨100⟩, ⟨011⟩, and ⟨111⟩ directions at (a) 300 K and (b) 77 K. There exists anisotropy during the transient time, i.e., the velocity overshoot regime, while the anisotropy disappears after that time. The impact-ionization process is more frequent along the ⟨100⟩ direction than along the ⟨011⟩ direction, and this is more pronounced at 77 K.

FIG. 21. Number of impact-ionization processes as a function of time after a 500 kV/cm electric field is applied along the ⟨100⟩, ⟨011⟩, and ⟨111⟩ directions at (a) 300 and (b) 77 K. Panel (c) shows the anisotropy ratio of the impact-ionization process.

The anisotropic impact-ionization process during the transient time is due to the anisotropic impact-ionization rate as well as to differences in the average electron energy along the crystallographic directions, shown in Fig. 22. From this figure, there is no significant difference between the average energies along the ⟨100⟩ and ⟨011⟩ directions. It is therefore concluded that the difference in the impact-ionization rate between the ⟨100⟩ and ⟨011⟩ directions contributes mainly to the anisotropic impact-ionization process during the transient time.

FIG. 22. Transient average electron energy as a function of elapsed time after a 500 kV/cm electric field is applied along the various directions.

The disappearance of the anisotropic impact-ionization process under the steady-state condition seems to contradict the anisotropy of the ionization probabilities.
In order to investigate this phenomenon in detail, the distribution in momentum space of the electrons undergoing impact ionization is calculated. Figure 23 shows two snapshots of this distribution at (a) 0.06 and (b) 0.25 ps after a 500 kV/cm electric field is applied at time 0 along the ⟨100⟩ direction at 300 K. In the figure, the distribution is projected onto the plane normal to the ⟨001⟩ direction. Open circles, crosses, and triangles indicate electrons in the first, second, and third conduction bands, respectively. At 0.06 ps, the electrons shift against the electric field; the shift of the elliptic distribution originating from the light electrons implies ballistic transport. In other words, electrons undergoing ballistic transport rapidly gain the threshold energy for impact ionization. At 0.25 ps, the electrons spread over the entire Brillouin zone except around the K points, and it is clear that electrons in the second conduction band mainly undergo impact ionization. It is concluded that the anisotropy of the ionization process is visible only during the transient time, when the electrons are distributed locally against the electric field in the Brillouin zone; under steady-state conditions, the electrons spread over the entire Brillouin zone. This is the reason why the anisotropy of the impact-ionization coefficient disappears in spite of the anisotropy of the impact-ionization probabilities.

FIG. 23. Distribution of electrons related to impact-ionization processes in momentum space at (a) 0.06 and (b) 0.25 ps after a 500 kV/cm electric field is applied along the ⟨100⟩ direction. Open circles, crosses, and triangles denote electrons in the first, second, and third conduction bands undergoing the impact-ionization process, respectively. At 0.06 ps, so-called lucky electrons in the first conduction band mainly undergo the impact-ionization process; the electrons are localized along the electric-field direction. At 0.25 ps, the electrons spread over the entire Brillouin zone, and those in the second conduction band mainly undergo the impact-ionization process. This is the reason why the anisotropy of impact ionization is visible only during the transient time.

IV. CONCLUSIONS

An MC simulator has been developed to simulate electron transport in homogeneous intrinsic silicon at high electric fields. The key features of the program include the following. (1) The energy band structure is computed by using a local pseudopotential method. (2) Phonon dispersion relations are obtained from an adiabatic bond-charge model. (3) The electron-phonon scattering rate is computed by using a rigid pseudo-ion model. The calculated scattering rate is consistent with the full band structure and the phonon dispersion relation, thus involving no adjustable parameters such as deformation potential coefficients. (4) The impact-ionization rate is calculated with the use of Fermi's golden rule including the full band structure as well as a dielectric function depending on both wave vector and transition energy. The impact-ionization rate obtained in the present study shows strong anisotropy, which is ignored by the traditional Keldysh formula. (5) In the program, the final state of a scattered electron is determined in such a way as to conserve both energy and momentum in each scattering process.
The simulated results under the steady-state condition agree well with the experimental data, which supports the models employed in the program.

We have also investigated velocity overshoot and the transient impact-ionization process. It appears that velocity overshoot and transient impact ionization show strong anisotropy: if an electric field is applied along the ⟨100⟩ crystallographic direction, the velocity overshoot as well as the transient impact ionization becomes maximal compared with the ⟨011⟩ and ⟨111⟩ directions. The anisotropy of the impact-ionization process disappears under the steady-state condition in spite of the anisotropic impact-ionization probabilities; the reason is that electrons spread over the entire Brillouin zone under the steady-state condition.

ACKNOWLEDGMENTS

The authors would like to thank Dr. N. Sano of Nippon Telegraph and Telephone Public Corporation (NTT) for many helpful discussions. Some of the authors (T.K. and M.T.) are grateful to Dr. N. Mori and H. Kubo of Osaka University for their encouragement throughout this study. One of the authors (T.K.) wishes to thank Dr. N. Kotani and Dr. N. Tsubouchi of Mitsubishi Electric Corporation for their encouragement and constant support. One of the authors (M.T.) is indebted to K. Masui, Dr. K. Fujii, T. Hayashi, and H. Kawazoe of Sharp Corporation for their encouragement and constant support.

1. A. Toriumi, M. Yoshimi, M. Iwase, Y. Akiyama, and K. Taniguchi, IEEE Trans. Electron Devices ED-34, 1501 (1987).
2. J. Bude, N. Sano, and A. Yoshii, Phys. Rev. B 45, 5848 (1992).
3. S. Takagi and A. Toriumi, Semicond. Sci. Technol. 7, B601 (1992).
4. A. L. Lacaita, F. Zappa, S. Bigliardi, and M. Manfredi, IEEE Trans. Electron Devices ED-40, 577 (1993).
5. C. Jacoboni, C. Canali, G. Ottaviani, and A. A. Quaranta, Solid-State Electron. 20, 77 (1977).
6. C. Jacoboni and L. Reggiani, Rev. Mod. Phys. 55, 645 (1983).
7. R. Brunetti, C. Jacoboni, F. Venturi, E. Sangiorgi, and B. Ricco, Solid-State Electron. 32, 1663 (1989).
8. F. Venturi, A. Abramo, E. Sangiorgi, J. Higman, C. Fiegna, and B. Ricco, IEDM Tech. Dig., 503 (1991).
9. X. Wang, V. Chandramouli, C. M. Maziar, and A. F. Tasch, J. Appl. Phys. 73, 3339 (1993).
10. J. Tang and K. Hess, J. Appl. Phys. 54, 5139 (1983).
11. M. V. Fischetti and S. E. Laux, Phys. Rev. B 38, 9721 (1988).
12. C. Fiegna and E. Sangiorgi, IEEE Trans. Electron Devices ED-40, 619 (1993).
13. L. V. Keldysh, Sov. Phys. JETP 21, 1135 (1965).
14. R. Thoma, H. Peifer, W. Engl, W. Quade, R. Brunetti, and C. Jacoboni, J. Appl. Phys. 69, 2300 (1991).
15. M. L. Cohen and T. K. Bergstresser, Phys. Rev. 141, 789 (1966).
16. W. Weber, Phys. Rev. B 15, 4789 (1977).
17. O. H. Nielsen and W. Weber, Comput. Phys. Commun. 18, 101 (1979).
18. S. Bednarek and U. Rössler, Phys. Rev. Lett. 48, 1296 (1982).
19. O. J. Glembocki and F. H. Pollak, Phys. Rev. Lett. 48, 413 (1982).
20. P. B. Allen and M. Cardona, Phys. Rev. B 27, 4760 (1983).
21. S. Zollner, S. Gopalan, and M. Cardona, Appl. Phys. Lett. 54, 614 (1989); S. Zollner, S. Gopalan, and M. Cardona, J. Appl. Phys. 68, 1682 (1990).
22. P. B. Allen and M. Cardona, Phys. Rev. B 23, 1495 (1981).
23. J. P. Walter and M. L. Cohen, Phys. Rev. B 5, 3101 (1972).
24. T. Nishino, M. Takeda, and Y. Hamakawa, Solid State Commun. 14, 627 (1974).
25. M. Welkowsky and R. Braunstein, Phys. Rev. B 5, 497 (1972).
26. W. E. Spicer and R. C. Eden, in Proceedings of the 9th International Conference on the Physics of Semiconductors, edited by S. M. Ryvkin (Nauka, Moscow, 1968), Vol. 1, p. 61.
27. R. Hulthén and N. G. Nilsson, Solid State Commun. 18, 1341 (1976).
28. H. H. Rosenbrock, Comput. J. 3, 175 (1960).
29. P. N. Keating, Phys. Rev. 145, 637 (1966).
30. H. Mizuno, K. Taniguchi, and C. Hamaguchi, Phys. Rev. B 48, 1512 (1993).
31. O. Jepsen and O. K. Andersen, Solid State Commun. 9, 1763 (1971).
32. G. Lehmann and M. Taut, Phys. Status Solidi B 54, 469 (1972).
33. J. Rath and A. J. Freeman, Phys. Rev. B 11, 2109 (1975).
34. P. B. Allen, Phys. Status Solidi B 120, 529 (1983).
35. E. Cartier, M. V. Fischetti, E. A. Eklund, and F. R. McFeely, Appl. Phys. Lett. 62, 3339 (1993).
36. E. O. Kane, Phys. Rev. 159, 624 (1967).
37. N. Sano, T. Aoki, and A. Yoshii, Appl. Phys. Lett. 55, 1418 (1989).
38. N. Sano, T. Aoki, M. Tomizawa, and A. Yoshii, Phys. Rev. B 41, 12122 (1990).
39. N. Sano and A. Yoshii, Phys. Rev. B 45, 4171 (1992).
40. J. Bude, K. Hess, and G. J. Iafrate, Phys. Rev. B 45, 10958 (1992).
41. C. Canali, C. Jacoboni, F. Nava, G. Ottaviani, and A. A. Quaranta, Phys. Rev. B 12, 2265 (1975).
42. R. Van Overstraeten and H. De Man, Solid-State Electron. 13, 583 (1970).
43. W. N. Grant, Solid-State Electron. 16, 1189 (1973).
44. C. Chang, C. Hu, and R. W. Brodersen, J. Appl. Phys. 57, 302 (1985).
45. G. Sai-Halasz, M. R. Wordeman, S. Rishton, E. Ganin, and D. P. Kern, IEEE Electron Device Lett. EDL-9, 464 (1988).
46. Y. Yamauchi and T. Ishibashi, IEEE Electron Device Lett. EDL-7, 655 (1986).
47. W. Lee, S. E. Laux, M. V. Fischetti, and D. D. Tang, IEDM Tech. Dig., 473 (1989).
188888
https://www.khanacademy.org/math/math2/xe2ae2386aa2e13d6:prob/xe2ae2386aa2e13d6:prob-add-rule/e/adding-probability
Adding probabilities (practice) | Khan Academy

Integrated math 2 > Unit 13 > Lesson 6: Addition rule for probability

Adding probabilities
CCSS.Math: HSS.CP.B.7
You might need: Calculator

Problem
26 customers are eating dinner at a local diner. Of the 26 customers, 20 order coffee, 8 order pie, and 7 order coffee and pie. Using this information, answer each of the following questions.

Let A be the event that a randomly selected customer orders coffee and B be the event that a randomly selected customer orders pie.
- What is P(A), the probability that a customer orders coffee?
- What is P(B), the probability that a customer orders pie?
- What is P(A and B), the probability that a customer orders coffee and pie?
- What is P(A or B), the probability that a customer orders coffee or pie?

Related content
Video (10:43): Addition rule for probability
Video (10:02): Probability with Venn diagrams
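The four answers follow directly from counting and the addition rule P(A or B) = P(A) + P(B) − P(A and B). Here is a quick cross-check in Python (our own sketch, not part of the exercise), using exact fractions:

```python
from fractions import Fraction

total = 26
coffee, pie, both = 20, 8, 7

p_a = Fraction(coffee, total)        # P(A) = 20/26 = 10/13
p_b = Fraction(pie, total)           # P(B) = 8/26  = 4/13
p_a_and_b = Fraction(both, total)    # P(A and B) = 7/26
p_a_or_b = p_a + p_b - p_a_and_b     # addition rule: 21/26

print(p_a, p_b, p_a_and_b, p_a_or_b)  # 10/13 4/13 7/26 21/26
```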
188889
https://www.aqion.de/site/extremely-dilute-acid
pH of an Extremely Dilute Acid

Problem
What is the pH of a 10⁻⁸ molar HCl solution? Compare the numerical result of aqion with the analytical solution of the corresponding equations.

1. Numerical Solution with aqion

We begin with pure water (button H2O), then click on Reac and enter for "HCl" the value 1e-5 mmol/L. (Please mind the concentration units: 10⁻⁸ mol/L = 10⁻⁵ mmol/L.) Start the calculation by clicking on the Start button. The result appears immediately:

    pH = 6.98

The highly diluted HCl solution (in contrast to pure water) decreases the pH by a tiny amount, from 7 to 6.98.

2. Analytical Solution (Theory)

Let us abbreviate the total HCl concentration by C_T = 10⁻⁸ M. The math system involves three unknowns, [H⁺], [OH⁻], and [Cl⁻], so we need three equations:

(1.1) strong acid (completely dissociated): [Cl⁻] = C_T
(1.2) charge balance: [H⁺] = [OH⁻] + [Cl⁻]
(1.3) self-ionization of water: K_w = [H⁺][OH⁻] = 10⁻¹⁴

Inserting the first two equations into the third yields a quadratic equation in x = [H⁺]:

(2) x² − C_T·x − K_w = 0

The (non-negative) solution for x is

(3) x = C_T/2 + √( (C_T/2)² + K_w )

After inserting the numbers into the equation we get:

(4) x = 1.05·10⁻⁷ M

The negative decadic logarithm of x defines the pH:¹

(5) pH = −lg [H⁺] = −lg x = 6.978

This is in perfect agreement with the numerical result above.²

Pitfalls

One often makes the mistake of identifying the H⁺ concentration with that of HCl, that is, setting [H⁺] = 10⁻⁸ M in the last equation:

(6) pH = −lg [H⁺] = −lg 10⁻⁸ = 8  ⇐ that's wrong

This is definitely wrong, because an acid (no matter how much you dilute it) cannot have a pH value above 7. Where does the fallacy lie? The answer is simple: we ignored the 10⁻⁷ mol/L of H⁺ that comes from the self-ionization of water, see (1.3). To this "background concentration" of 1.0·10⁻⁷ M we should add the small amount of 0.1·10⁻⁷ M from HCl.

Further Reading
- the difference between weak and dilute acids – see here
- analytical equations for N-protic acids – see here

Remarks & Footnotes
¹ The good news: no activity corrections need to be made for these extremely small concentrations.
² The program outputs pH values with only two significant digits (which is reasonable for almost all practical applications). Internally, however, aqion works with high numerical precision (e.g. pH = 6.978060).

[last modified: 2023-11-16]
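The analytical result is easy to reproduce numerically. A minimal Python sketch of equations (2) through (5), written by us and independent of aqion's solver:

```python
import math

C_T = 1e-8    # total HCl concentration [mol/L]
K_w = 1e-14   # ion product of water

# Non-negative root of x^2 - C_T*x - K_w = 0, i.e. equation (3)
x = C_T / 2 + math.sqrt((C_T / 2) ** 2 + K_w)
pH = -math.log10(x)

print(f"[H+] = {x:.4e} mol/L")   # ~1.0513e-07
print(f"pH   = {pH:.3f}")        # 6.978
```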
188890
https://brilliant.org/landing/winning-with-probability/paired-value/
Probability and Chance: use probability to make better decisions.

Calculating Expected Value for Multiple Independent Events
Learn how to find the expected value across multiple independent events by finding probabilities and then multiplying them by a fixed reward.
Common core mapping: 7.SP.C.7, 7.SP.C.8
Content tags: expected value for multiple events, two decks, independent events, coin flip combinations, multiplying probabilities, multiple outcomes

Video · 3 mins

Let's calculate the expected value for goals that depend on more than one event.

First, let's look at these two decks. Our goal is to draw a star from the white deck and a square from the black deck. What is the probability of meeting this goal? There are two stars in the white deck and one square in the black deck, and the total number of possible pairs is six, so the probability is 2/6. Now what's the expected value? Expected value is the probability of an outcome multiplied by its payout. Here the probability is 2/6 and the payout is 3 points, so we multiply 2/6 by 3. That gives us 6/6, which simplifies to one point. Just like drawing a single card from a deck, the expected value of drawing a card from two decks is the probability times the payout.

Let's try another goal with two independent events. Here we need to draw a triangle card and roll a number less than or equal to two. First, let's find the probability of drawing a triangle. There are three cards and only one of them is a triangle, so the probability is 1/3. Next, let's find the probability of rolling a two or less on a six-sided die. Only rolling a one or a two meets the goal, so the probability is 2/6. To find the probability of both of these things happening, we multiply their individual probabilities together: 1/3 × 2/6 = 2/18. This is the probability of hitting our goal. Now let's find the expected value if the payout is 6 points. We just multiply the probability we found, 2/18, by 6: 2/18 × 6 gives us 12/18, which is 2/3.

When there are many possible outcomes, thinking about the total number of possibilities can be helpful. Let's look at a final example involving a single card draw and a coin flip. There are four possible cards and two sides of the coin, so there are 4 × 2 = 8 total possible outcomes. Our goal is to draw a circle and get heads on the coin flip. Let's see how many ways we can achieve this. Two of the four cards are circles, and for each of those two circle cards we could also get heads. So there are two ways to get our desired outcome out of eight total possibilities. This means our probability is 2/8. If the payout for hitting this goal is eight points, we can find the expected value by multiplying the probability 2/8 by 8. This gives us an expected value of two. When there are multiple outcomes, the way to find expected value is still multiplying the probability by the payout.

Interactive lesson · 2 mins: put your learning to the test with an interactive lesson.
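All three worked examples reduce to "multiply the independent probabilities, then multiply by the payout." A small Python sketch of our own (the function and variable names are ours, not Brilliant's) reproducing the lesson's numbers:

```python
from fractions import Fraction

def expected_value(p, payout):
    # expected value = probability of the goal times its payout
    return p * payout

# Example 1: star from the white deck AND square from the black deck
print(expected_value(Fraction(2, 6), 3))   # 2/6 of 3 points -> 1

# Example 2: triangle card (1/3) AND die roll <= 2 (2/6), payout 6
p = Fraction(1, 3) * Fraction(2, 6)        # 2/18 = 1/9
print(expected_value(p, 6))                # 2/3

# Example 3: circle card (2/4) AND heads (1/2), payout 8
p = Fraction(2, 4) * Fraction(1, 2)        # 2/8 = 1/4
print(expected_value(p, 8))                # 2
```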
188891
https://www.mometrix.com/academy/greatest-common-factor/
Greatest Common Factor and Least Common Multiple

What is the Greatest Common Factor and Least Common Multiple?

Hi, and welcome to this video covering the least common multiple and the greatest common factor! As you know, there are times when we have to algebraically "adjust" how a number or an equation appears in order to proceed with our math work. We can use the greatest common factor and the least common multiple to do this.

The greatest common factor (GCF) is the largest number that is a factor of two or more numbers, and the least common multiple (LCM) is the smallest number that is a multiple of two or more numbers.

Adding Fractions

To see how these concepts are useful, let's look at adding fractions. Before we can add fractions, we have to make sure the denominators are the same by creating an equivalent fraction:

(\frac{2}{3}+\frac{1}{6} \rightarrow \frac{2}{3}\times \frac{2}{2}+\frac{1}{6} \rightarrow \frac{4}{6}+\frac{1}{6}=\frac{5}{6})

In this example, the least common multiple of 3 and 6 must be determined. In other words, "What is the smallest number that both 3 and 6 can divide into evenly?" With a little thought, we realize that 6 is the least common multiple, because 6 divided by 3 is 2 and 6 divided by 6 is 1. The fraction (\frac{2}{3}) is then adjusted to the equivalent fraction (\frac{4}{6}) by multiplying both the numerator and denominator by 2. Now the two fractions with common denominators can be added for a final value of (\frac{5}{6}).

Finding the Least Common Multiple

In the context of adding or subtracting fractions, the least common multiple is referred to as the least common denominator. In general, you need to determine a number larger than or equal to two or more numbers to find their least common multiple. It is important to note that there is more than one way to determine the least common multiple. One way is to simply list all the multiples of the values in question and select the smallest shared value, as seen here:

Least common multiple of 8, 4, 6
(8\rightarrow 8,16,24,32,40,48)
(4\rightarrow 4,8,12,16,20,24,28,32)
(6\rightarrow 6,12,18,24,30,36)

This illustrates that the least common multiple of 8, 4, and 6 is 24 because it is the smallest number that 8, 4, and 6 can all divide into evenly.

Another common method involves the prime factorization of each value. Remember, a prime number is only divisible by 1 and itself. Once the prime factors are determined, list the shared factors once, and then multiply them by the other remaining prime factors. The result is the least common multiple:

(36=2\times 2\times 3\times 3)
(90=2\times 3\times 3\times 5)
(\text{LCM}=2\times 3\times 3\times 2\times 5=180)

The least common multiple can also be found by common (or repeated) division. This method is sometimes considered faster and more efficient than listing multiples and finding prime factors. Here is an example of finding the least common multiple of 3, 6, and 9 using this method: Divide the numbers by the factors of any of the three numbers. 6 has a factor of 2, so let's use 2. You cannot divide 9 and 3 evenly by 2, so we'll just rewrite 9 and 3 here. Repeat this process until all of the numbers are reduced to 1. Then, multiply all of the divisors together to get the least common multiple.
| Divisor | | | |
| --- | --- | --- | --- |
| 2 | 3 | 6 | 9 |
| 3 | 3 | 3 | 9 |
| 3 | 1 | 1 | 3 |
|   | 1 | 1 | 1 |

LCM (=2\times 3\times 3=18)

Now that methods for finding least common multiples have been introduced, we'll need to change our mindset to finding the greatest common factor of two or more numbers. We will be identifying a value smaller than or equal to the numbers being considered. In other words, ask yourself, "What is the largest value that divides both of these numbers?" Understanding this concept is essential for dividing and factoring polynomials.

What is the Greatest Common Factor?

Prime factorization can also be used to determine the greatest common factor. However, rather than multiplying all the prime factors like we did for the least common multiple, we will multiply only the prime factors that the numbers share. The resulting product is the greatest common factor.

Review

Let's wrap up with a couple of true or false review questions:

1. "The least common multiple of 45 and 60 is 15." The answer is false. The greatest common factor of 45 and 60 is 15, but the least common multiple is 180.

2. "The least common multiple is a number greater than or equal to the numbers being considered." The answer is true. The least common multiple is greater than or equal to the numbers being considered, while the greatest common factor is equal to or less than the numbers being considered.

Thanks for watching, and happy studying!

Frequently Asked Questions

Q How do you find the LCM and GCF?

A There are a variety of techniques for finding the LCM and GCF. The two most common strategies involve making a list, or using the prime factorization. For example, the LCM of 5 and 6 can be found by simply listing the multiples of (5) and (6), and then identifying the lowest multiple shared by both numbers.
(5, 10, 15, 20, 25, \mathbf{30}, 35…)
(6, 12, 18, 24, \mathbf{30}, 36…)
(\mathbf{30}) is the LCM.

Similarly, the GCF can be found by listing the factors of each number, and then identifying the greatest factor that is shared. For example, the GCF of (40) and (32) can be found by listing the factors of each number.
(40): (1, 2, 4, 5, \mathbf{8}, 10, 20, 40)
(32): (1, 2, 4, \mathbf{8}, 16, 32)
(\mathbf{8}) is the GCF.

For larger numbers, it will not be realistic to make a list of factors or multiples to identify the GCF or LCM. For large numbers, it is most efficient to use the prime factorization technique. For example, when finding the LCM, start by finding the prime factorization of each number (this can be done by creating a factor tree). The prime factorization of (20) is (2\times2\times5), and the prime factorization of (32) is (2\times2\times2\times2\times2). Circle the factors that are in common and only count these once. Now multiply all of the factors (remember not to double-count those circled (2)s). This becomes (2\times2\times5\times2\times2\times2), which equals (160). The LCM of (20) and (32) is (160).

When finding the GCF, start by listing the prime factorization of each number (this can be done by creating a factor tree). For example, the prime factorization of (45) is (5\times3\times3), and the prime factorization of (120) is (5\times3\times2\times2\times2). Now simply multiply all of the factors that are shared by both numbers. In this case, we would multiply (5\times3), which equals (15). The GCF of (45) and (120) is (15). The prime factorization approach can seem like a fairly lengthy process, but when working with large numbers it is guaranteed to be a time-saver.

Q How do you find the GCF?
A There are two main strategies for finding the GCF: listing the factors, or using the prime factorization.

The first strategy involves simply listing the factors of each number, and then looking for the greatest factor that is shared by both numbers. For example, if we are looking for the GCF of (36) and (45), we can list the factors of both numbers and identify the largest number in common.
(36): (1,2,3,4,6,\mathbf{9},12,18,36)
(45): (1,3,5,\mathbf{9},15,45)
The GCF of (36) and (45) is (\mathbf{9}).

Listing the factors of each number and then identifying the largest factor in common works well for small numbers. However, when finding the GCF of very large numbers it is more efficient to use the prime factorization approach. For example, when finding the GCF of (180) and (162), we start by listing the prime factorization of each number (this can be done by creating a factor tree). The prime factorization of (180) is (2\times2\times3\times3\times5), and the prime factorization of (162) is (2\times3\times3\times3\times3). Now look for the factors that are shared by both numbers. In this case, both numbers share one (2) and two (3)s, or (2\times3\times3). The result of (2\times3\times3) is (18), which is the GCF! This strategy is often more efficient when finding the GCF of really large numbers.

Q What does GCF mean?

A The GCF stands for the "greatest common factor". The GCF is defined as the largest number that is a factor of two or more numbers. For example, the GCF of (24) and (36) is (12), because the largest factor that is shared by (24) and (36) is (12). (24) and (36) have other factors in common, but (12) is the largest.

Q How do you find the lowest common multiple?

A There are a variety of techniques for finding the lowest common multiple. Two common approaches are listing the multiples, and using the prime factorization. Listing the multiples is just as it sounds: simply list the multiples of each number, and then look for the lowest multiple shared by both numbers. For example, when finding the lowest common multiple of (3) and (4), list the multiples:
(3): (3,6,9,\mathbf{12},15,18…)
(4): (4,8,\mathbf{12},16,20…)
(\mathbf{12}) is the lowest multiple shared by (3) and (4).

Listing the multiples is a great strategy when the numbers are fairly small. When numbers are large, such as (38) and (42), we should use the prime factorization approach. Start by listing the prime factorization of each number (this can be done using a factor tree).
(38): (2\times19)
(42): (2\times3\times7)
Now circle the shared factors and count these only once, then multiply all of the factors (remember to count the shared (2) only once). This becomes (2\times19\times3\times7), which equals (798). The LCM of (38) and (42) is (798).

Q How do you take out the LCM?

A Taking out the LCM is a helpful skill when adding or subtracting fractions. Determining the lowest common multiple creates a denominator that is the same for both fractions. For example, the common denominator for (\frac{2}{7}+\frac{3}{5}) would be (35), because (35) is the LCM of (7) and (5). The new fractions become (\frac{10}{35}+\frac{21}{35}), which equals (\frac{31}{35}).

Greatest Common Factor and Least Common Multiple Practice Questions

Question #1: What is the greatest common factor of 16 and 42? Use it to reduce the fraction (\frac{16}{42}).
A. GCF is 8, and we reduce to (\frac{2}{5}).
B. GCF is 1, and we cannot reduce any further.
C. GCF is 4, and we reduce to (\frac{4}{11}).
D. GCF is 2, and we reduce to (\frac{8}{21}).

Answer: The correct answer is D: GCF is 2, and we reduce to (\frac{8}{21}). Let's approach this problem by listing the prime factors of both the numerator and the denominator.
(16=2×2×2×2)
(42=2×3×7)
Here we see that 2 is the only shared factor of 16 and 42 and is therefore their greatest common factor. We can then divide both numbers by 2 to reduce the fraction: (\frac{16\div2}{42\div2}=\frac{8}{21})

Question #2: Find the least common multiple of 2, 6, and 8.

A. 16
B. 18
C. 24
D. 48

Answer: The correct answer is C: 24. For this problem, let's list the prime factors of each number.
(2=2) (note that we could write (2\times1), but 1 is understood, or implied, and usually not necessary to write)
(6=2\times3)
(8=2\times2\times2)
Remember, when calculating the LCM of two or more numbers, we list each prime factor once that is shared by all of the numbers. Since each of our numbers has 2 as a prime factor, our LCM will also have 2 as one of its prime factors. LCM (=2\times) _______. Now from the 6 we have a leftover 3, and from the 8 we have two 2's remaining. We multiply those in to get LCM (=2\times3\times2\times2=24). Notice that even though 2, 6, and 8 are all factors of 48, the solution is not D, because 48 is not the smallest common multiple.

Question #3: List the first several multiples of 3, 5, and 6 to find the least common multiple.

A. LCM is 15
B. LCM is 30
C. LCM is 18
D. LCM is 75

Answer: The correct answer is B: LCM is 30.
The first several multiples of 3 are: 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, …
The first several multiples of 5 are: 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, …
The first several multiples of 6 are: 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, …
As we see above, 30 is the first (least) number that 3, 5, and 6 have in common among their multiples, so the least common multiple is 30.

Question #4: Courtney has 54 pieces of candy, and Trish has 36. They want to prepare goodie bags of candy for their friend Kim's birthday party, but each bag needs to have an equal amount of candy. In order to have the most candy in each bag, with Courtney and Trish working separately, how many bags can they make, and how much candy will be in each bag?

A. 10 bags, with 9 pieces of candy in each
B. 9 bags, with 10 pieces of candy in each
C. 15 bags, with 6 pieces of candy in each
D. 5 bags, with 18 pieces of candy in each

Answer: The correct answer is D: 5 bags, with 18 pieces of candy in each. To begin, list the prime factors of both 54 and 36:
(54=2\times3\times3\times3)
(36=2\times2\times3\times3)
Notice that they both share a 2 and two 3s. The product of these shared prime factors is (2\times3\times3=18). We now know that the GCF is 18, which means each bag will contain 18 pieces of candy. Courtney's 54 pieces will make 3 bags, and Trish's 36 pieces will make 2 bags. Together, they will make 5 bags with 18 pieces of candy in each.

Question #5: Sara is buying fruit for an office brunch, and she needs an equal number of apples and bananas. However, the apples are sold in bags of 4 and the bananas are sold in bunches of 6. What is the least number of apples and bananas Sara can buy?

A. 24 apples and 24 bananas
B. 18 apples and 18 bananas
C. 12 apples and 12 bananas
D. 16 apples and 16 bananas

Answer: The correct answer is C: 12 apples and 12 bananas. With this problem, we want to know the least common multiple of 4 and 6.
Using the prime factors method, we see the following:
(4=2\times2)
(6=2\times3)
LCM (=2\times2\times3=12)
Sara will buy three bags of apples and two bunches of bananas in order to have 12 of each fruit.

Return to Pre-Algebra Videos
by Mometrix Test Preparation | Last Updated: July 31, 2025
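As a computational footnote (ours, not part of the Mometrix lesson), every example above can be checked in a few lines of Python. math.gcd is built in, and the identity lcm(a, b) = a·b / gcd(a, b) ties the two concepts together:

```python
import math
from functools import reduce

def lcm(a, b):
    # LCM via the GCF: lcm(a, b) * gcd(a, b) == a * b
    return a * b // math.gcd(a, b)

print(math.gcd(45, 60))         # 15
print(lcm(45, 60))              # 180
print(reduce(lcm, (8, 4, 6)))   # 24  (listing-multiples example)
print(reduce(lcm, (3, 6, 9)))   # 18  (repeated-division example)
print(math.gcd(54, 36))         # 18  (candy-bag question)
print(lcm(4, 6))                # 12  (apples-and-bananas question)
```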
188892
https://puzzling.stackexchange.com/questions/110250/paint-eleven-squares
Paint Eleven Squares (Puzzling Stack Exchange; tags: mathematics, combinatorics)

Asked May 26, 2021 at 5:54 · Modified 4 years, 4 months ago · Viewed 694 times · Score 7

I was inspired by this great question: Paint Eight Squares

Given a 5×5 grid of white squares, can you paint 11 of the squares black so that each white square is orthogonally adjacent to exactly two black squares?

asked by Dmitry Kamenetsky

1 Answer (score 9)

The answer is yes and here's why: [solution image not preserved in this extract]

answered May 26, 2021 at 6:06 by Deusovi

Comments:
- "That is correct! In fact this is the only solution I found and it is pretty." – Dmitry Kamenetsky, May 26, 2021 at 6:11
- "Pfft. That's cheating. 'can you paint 11 of the squares black so that each white square is orthogonally adjacent to exactly two black squares?' yes, trivially, if none of the squares are white! :-)" – msh210, May 26, 2021 at 14:59
- "they are white to Deusovi..." – Dmitry Kamenetsky, May 26, 2021 at 15:15
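Not part of the thread, but the claim is easy to verify by exhaustive search. A minimal Python sketch of our own (Python 3.10+ for int.bit_count; unoptimized, it takes a few minutes to scan all C(25, 11) = 4,457,400 paintings):

```python
from itertools import combinations

N = 5
# Bitmask of orthogonal neighbours for each of the 25 cells
nbr = []
for r in range(N):
    for c in range(N):
        m = 0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < N and 0 <= c + dc < N:
                m |= 1 << ((r + dr) * N + (c + dc))
        nbr.append(m)

def valid(black):
    # every cell NOT painted black must have exactly two black neighbours
    return all((nbr[i] & black).bit_count() == 2
               for i in range(N * N) if not black >> i & 1)

for combo in combinations(range(N * N), 11):
    black = 0
    for i in combo:
        black |= 1 << i
    if valid(black):
        print([divmod(i, N) for i in combo])  # (row, col) of the 11 black cells
        break
```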
188893
https://www.merriam-webster.com/word-of-the-day/incongruous-2019-11-14
Word of the Day: incongruous

What It Means: lacking congruity: as a : not harmonious : incompatible; b : not conforming : disagreeing; c : inconsistent within itself; d : lacking propriety : unsuitable

incongruous in Context
The sight of a horse and carriage amongst the cars on the road was a bit incongruous.
"The gunplay scene was so incongruous with the rest of the film that one wonders if [director Michael] Engler added the assassination storyline to simply beef up the movie's runtime." – John Vaaler, The Middlebury (Vermont) Campus, 3 Oct. 2019

Did You Know?
Incongruous is a spin-off of its antonym, congruous, which means "in agreement, harmony, or correspondence." Etymologists are in agreement about the origin of both words: they trace to the Latin congruus, from the verb congruere, which means "to come together" or "to agree." The dates of these words' first uses in English match up pretty well, too. Both words are first known to have appeared in English in the early 1580s.

Name That Antonym: Fill in the blanks to complete an antonym of incongruous that means "harmonious": e _ _ y _ h _ _ c.

More Words of the Day: Sep 28 kerfuffle · Sep 27 vociferous · Sep 26 gesundheit · Sep 25 anomaly · Sep 24 brandish · Sep 23 nonpareil
188894
https://www.vedantu.com/jee-main/maths-lamis-theorem
JEE Main Maths: Lami's Theorem

What is Lami's Theorem?

Lami's theorem relates the magnitudes of three concurrent, coplanar, and non-collinear forces that keep an object in static equilibrium. The theorem is very useful for analyzing most mechanical and structural systems, and the proportionality constant is the same for all three forces. Lami's theorem is applied in the static analysis of structural and mechanical systems, and is named after Bernard Lamy.

Statement of Lami's Theorem

Lami's theorem states: "When three forces acting at a point are in equilibrium, each force is proportional to the sine of the angle between the other two forces."

Let three forces A, B, C act on a rigid body or particle, making angles α, β, and γ with each other. In equation form,

\frac{A}{\sin\alpha} = \frac{B}{\sin\beta} = \frac{C}{\sin\gamma}

Derivation of Lami's Theorem

Let F_A, F_B, and F_C be the forces acting at a common point. By the statement of the theorem the point is in equilibrium, so the vector sum of the three forces is zero:

F_A + F_B + F_C = 0

Because the forces are vectors, they can be translated head-to-tail to form a closed triangle (the triangle law of vector addition). Each interior angle of this triangle is the supplement of the angle between the corresponding pair of forces, so applying the sine rule to the triangle gives

\frac{A}{\sin(180^\circ-\alpha)} = \frac{B}{\sin(180^\circ-\beta)} = \frac{C}{\sin(180^\circ-\gamma)}

and since sin(180° − x) = sin x, this reduces to

\frac{A}{\sin\alpha} = \frac{B}{\sin\beta} = \frac{C}{\sin\gamma}

Hence, by applying the sine rule to the supplementary angles, we reach the required result. (The same derivation is often written with the forces labeled P, Q, R.) Lami's theorem can now be used to determine the magnitudes of unknown forces in a given system.

Example of Lami's Theorem

Problem: An advertisement signboard hangs from two strings, each making an equal angle θ with the ceiling. Calculate the tension in the strings.

Solution: A free-body diagram lets us resolve the forces first; we then apply the theorem to find the tension. Two kinds of force act: the weight of the signboard, mg, directed downward, and the tension generated in each string. The tension T is the same in both strings because both strings make the same angle with the signboard. Applying Lami's theorem,

\frac{T}{\sin(180^\circ-\theta)} = \frac{mg}{\sin 2\theta}

Since sin(180° − θ) = sin θ and sin 2θ = 2 sin θ cos θ,

\frac{T}{\sin\theta} = \frac{mg}{2\sin\theta\cos\theta}, that is, T = \frac{mg}{2\cos\theta}

The same analysis, with the same result, applies to a child playing on a swing.

Numerical example: let the strings make 45° with a signboard of mass 6 kg. With g = 9.8 m/s²,

T = (6 × 9.8) / (2 cos 45°) ≈ 41.6 N

Therefore, the tension in each string needed to hold the signboard exactly horizontal is about 41.6 N.

Limitations of Lami's Theorem

The following conditions must all hold before the theorem is applied:
- There must be exactly three forces.
- The three forces must be coplanar (they must lie in a single plane).
- The three forces must be concurrent (their lines of action must meet at a point).
- The three forces must be non-collinear (their lines of action must not overlap); equivalently, no angle between two of the forces may equal 180°.
- The three forces must point radially inward or outward and oppose one another; no angle between two of the forces may exceed 180°.
- The three forces must be in equilibrium.

Applicability of Lami's Theorem

The theorem is obtained from the sine rule for triangles. If we represent the forces as lines, as in a free-body diagram, and translate them so that the head of one touches the tail of another, three forces that cancel one another form a closed triangle; forces that do not cancel form an open curve. The sine rule applies only to triangles and not to all polygons, so Lami's theorem is applicable to exactly three forces, not to an arbitrary number of forces.

FAQs on Lami's Theorem

1. Define Lami's theorem.
Lami's theorem states: "When three forces acting at a point are in equilibrium, each force is proportional to the sine of the angle between the other two forces." It relates the magnitudes of concurrent, coplanar, and non-collinear forces that maintain an object in static equilibrium, and the proportionality constant is the same for all three forces.

2. What are the limitations of Lami's theorem?
There must be exactly three forces, and they must be coplanar, concurrent, non-collinear, and in equilibrium (see the list above).

3. What are the applications of Lami's theorem?
Applications listed for the theorem and its underlying trigonometry include: finding the lengths of the sides of right triangles; determining the speed of sound in water (and sometimes the range of a sound source) in oceanography; log calculators; and calculations in geology, meteorology, and aerospace, or anywhere trigonometric or logarithmic calculations are incorporated.

4. Is Lami's theorem applicable to more than three forces?
No. The theorem is derived from the sine rule: three forces in equilibrium, drawn head-to-tail, form a closed triangle, while forces that do not cancel form an open curve. Since the sine rule holds only for triangles, Lami's theorem applies to three forces only, not to any number of forces.

5. Where can students find study materials on Lami's theorem?
Students can sign in on the Vedantu app or website and download the study materials in PDF format, free of charge. The materials are created by professionals in the field.

6. Solve the given problem using Lami's theorem.
Problem statement: A baby is playing on a swing that hangs from two identical chains, at rest. Identify the forces acting on the baby by applying Lami's theorem, and find the tension acting on each chain.
Solution: The baby and the chains are modeled as a particle hung by two identical strings. Three forces act on the baby: the downward gravitational force mg along the negative y direction, and the tension T along each of the two strings. These three forces are coplanar and concurrent. By Lami's theorem,
\frac{T}{\sin(180^\circ-\theta)} = \frac{mg}{\sin 2\theta}
Since sin(180° − θ) = sin θ and sin 2θ = 2 sin θ cos θ,
\frac{T}{\sin\theta} = \frac{mg}{2\sin\theta\cos\theta}, so the tension in each chain is T = \frac{mg}{2\cos\theta}.
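As a quick numerical cross-check of the signboard example, here is a short Python sketch of our own (the variable names are ours, not from the page), verifying both the closed form T = mg/(2 cos θ) and the Lami ratios themselves:

```python
import math

m, g = 6.0, 9.8               # signboard mass [kg], gravity [m/s^2]
theta = math.radians(45)      # string angle with the ceiling

# Closed-form result from Lami's theorem
T = m * g / (2 * math.cos(theta))
print(f"T = {T:.1f} N")       # ~41.6 N

# Cross-check the Lami ratios: forces T, T, mg with angle (180 - theta)
# between each tension and the weight, and 2*theta between the two
# tensions (the three angles sum to 360 degrees).
r1 = T / math.sin(math.pi - theta)
r2 = m * g / math.sin(2 * theta)
print(math.isclose(r1, r2))   # True
```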
188895
https://villains.fandom.com/wiki/Emmon_Frey
Emmon Frey

This article's content is marked as Mature. The page contains mature content that may include coarse language, sexual references, strong drug use, extremely traumatic themes, and/or graphic violent images which may be disturbing to some. Mature pages are recommended for those who are 18 years of age and older.

Full Name: Emmon Frey
Alias: Emm (by Genna); Lord Emmon; Ser Emmon
Origin: A Song of Ice and Fire
Occupation: Lord of Riverrun; Head of House Frey of Riverrun; knight of the houses Frey and Lannister
Powers / Skills: Manipulation; jousting; combat (presumably, at least basic skills); vast wealth and resources; connection with House Lannister and Casterly Rock's power
Goals: Take Riverrun from the rebels (succeeded); execute Edmure Tully to secure his claim on Riverrun (failed); serve House Lannister and the Iron Throne; find the Blackfish; get rid of the Brotherhood Without Banners (all ongoing)
Crimes: Aiding and abetting; invasion; attempted hostage killing; attempted wrongful execution
Type of Villain: Insecure Coward

"For a son to raise his hand against a father. Monstrous. These are dark days in Westeros. I fear for us all with Lord Tywin gone."
~ Emmon Frey

Lord Emmon Frey, initially introduced as Ser Emmon Frey, is a minor antagonist in the A Song of Ice and Fire novel series. He is a knight, and currently lord, of House Frey, being the second son of Lord Walder Frey and his first wife, Perra Royce. He is married to Genna Lannister, and the two have four sons (Cleos, Lyonel, Tion, and Walder) and two grandsons, Tywin and Willem. Emmon has been the Lord of Riverrun since the aftermath of the Red Wedding.

Appearance

Emmon is described as a thin and mostly bald man with a prominent Adam's apple.

Personality

Emmon Frey is sullen and nervous. Because he is incompetent, his wife Genna considers him a fool.

Biography

Emmon was born the second son of Walder Frey and his first wife, Perra Royce. He was married to Lord Tywin Lannister's sister, Genna Lannister, when she was 7 and he was 14.
Tywin, though only ten, was the only one who spoke against the match when his father, Tytos Lannister, declared it. Emmon and Genna have four children, Cleos, Lyonel, Tion, and Walder, and through their eldest son, Ser Cleos Frey, two grandsons, Tywin and Willem. This Frey branch has resided at Casterly Rock since Emmon wed Genna. Ser Emmon attended the Hand's Tourney in King's Landing with five of his brothers. When the War of the Five Kings begins, the Freys are persuaded to join Robb Stark, but Emmon takes the Lannisters' side because of his marriage. Unlike the rest of House Frey, he and his children and grandchildren choose Genna Lannister's allegiance over Lord Walder's, fighting for House Lannister from the beginning of the war without ever siding with House Stark or House Tully. During the war, Cleos Frey, while accompanying his cousin Jaime Lannister and Brienne of Tarth, is killed in a fight with outlaws on the road between Maidenpool and Duskendale. Shortly after, Emmon's son Tion is murdered by Lord Rickard Karstark at Riverrun, along with one of Ser Kevan Lannister's sons, Willem. The Freys, with the aid of Roose Bolton, treacherously betray Robb at the Red Wedding, held for the marriage of Robb's uncle Edmure Tully, Lord of Riverrun and Lord Paramount of the Trident, to one of Emmon's half-sisters, Roslin Frey. Robb, his mother, Catelyn Stark (née Tully), and most of their troops are murdered, and many nobles, including Edmure, are captured. For their rebellion against the Iron Throne, the Tullys are stripped of Riverrun and all its lands and incomes, which are granted to Lord Emmon Frey for his loyalty and service to the Lannisters and the Iron Throne, creating House Frey of Riverrun; Lord Edmure Tully is attainted by the crown. Although Emmon serves as Lord of Riverrun, the castle of Harrenhal has replaced Riverrun as the capital of the Riverlands, and the position of Lord Paramount of the Trident (ruler of the Riverlands) has belonged to the Lord of Harrenhal, Petyr Baelish, since the aftermath of the Battle of the Blackwater. Walder Frey's grandson and heir, Ser Ryman Frey, leads a Frey force of 2,000 men that besieges Riverrun, which is being held by Edmure's uncle, Ser Brynden Tully, the Blackfish. They are assisted by the new Warden of the West, Ser Daven Lannister, and many of the river lords are forced to participate in the siege, though unwillingly. Jaime Lannister is sent to the Riverlands to end the siege. He finds it being poorly handled, with Ryman Frey threatening to hang Edmure each day but never going through with it, as he does not want to lose a hostage. Emmon thinks he has been made Lord Paramount of the Trident, but Jaime tells him the title has gone to Petyr Baelish, the new Lord of Harrenhal. Emmon is worried about Riverrun being attacked, as he does not want his new castle damaged, and he continually reminds people of his new title, showing his letter from King Tommen Baratheon, until his wife finally tells him everybody knows about the king's decree. Jaime persuades Edmure to tell his uncle to surrender Riverrun, threatening to destroy the castle and the Tullys if Edmure refuses, and offering to treat Edmure well at Casterly Rock and to send his pregnant wife Roslin with him. However, Edmure delays, and by the time Riverrun is surrendered, Brynden has escaped by swimming under a gate and down the river. Emmon is furious and threatens to have Edmure beheaded for this. Jaime sends Edmure back to Casterly Rock.
Emmon is left to hold Riverrun with a garrison of 200 men, though he worries that Brynden may try to retake the castle. He calls together all the people of Riverrun and gives a long speech about how he will be their new lord. Unknown to him, Tom Sevenstrings, a minstrel from the siege camp now staying at Riverrun, is a member of the Brotherhood without Banners, currently led by Lady Stoneheart, whose intention is to kill anybody connected to the Red Wedding, such as the Freys.
188896
https://infinitylearn.com/question-answer/if-chord-joiningt1andt2on-the-parabolay24axis-a-fo-62db9a5ae202f3379dbbe503
Q. If the chord joining t1 and t2 on the parabola y² = 4ax is a focal chord, then t1·t2 =

(a) 2    (b) 1    (c) 0    (d) −1

Answer: (d) −1

Detailed Solution

The chord joining the points with parameters t1 and t2 on the parabola y² = 4ax has equation

y(t1 + t2) = 2x + 2a·t1·t2.    (1)

A focal chord passes through the focus (a, 0), so substituting x = a and y = 0 into (1) gives

0 = 2a + 2a·t1·t2  ⇒  t1·t2 = −1.
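This is not part of the original solution, but a quick numerical check in R (the values of a and t1 below are arbitrary illustrative choices) confirms that the chord through t1 and t2 = −1/t1 does pass through the focus:

# Points on y^2 = 4ax are (a t^2, 2 a t). The chord joining t1 and t2 is
# y (t1 + t2) = 2 x + 2 a t1 t2; if t1 t2 = -1 it should hit the focus (a, 0).
a  <- 3          # arbitrary a > 0
t1 <- 1.7        # arbitrary nonzero parameter
t2 <- -1 / t1    # partner parameter predicted for a focal chord
lhs <- 0 * (t1 + t2)              # left side of the chord equation at y = 0
rhs <- 2 * a + 2 * a * t1 * t2    # right side of the chord equation at x = a
all.equal(lhs, rhs)               # TRUE: the focus lies on the chord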
188897
https://ocw.mit.edu/courses/10-626-electrochemical-energy-systems-spring-2014/pages/lecture-notes/
10.626 | Spring 2014 | Graduate
Electrochemical Energy Systems
Lecture Notes

Instructor: Prof. Martin Bazant
Department: Chemical Engineering
Topics: Chemical Engineering; Materials Science and Engineering (Electronic Materials); Chemistry (Analytical, Physical)

Topics covered in lectures in 2014 are listed below. In some cases, links are given to new lecture notes by student scribes. All scribed lecture notes are used with the permission of the anonymous student author. The recommended reading refers to the lecture notes and exam solutions from previous years or to the books listed below. Lecture notes from previous years are also found in the study materials section.

[Newman] = Newman, John, and Karen E. Thomas-Alyea. Electrochemical Systems. 3rd ed. Wiley-Interscience, 2004. ISBN: 9780471477563.
[Bard] = Bard, Allen J., and Larry R. Faulkner. Electrochemical Methods: Fundamentals and Applications. 2nd ed. Wiley, 2000. ISBN: 9780471043720.
[O'Hayre] = O'Hayre, Ryan, Suk-Won Cha, et al. Fuel Cell Fundamentals. 2nd ed. Wiley, 2009. ISBN: 9780470258439.
[Huggins] = Huggins, Robert A. Advanced Batteries: Materials Science Aspects. Springer, 2008. ISBN: 9780387764238.

| SES # | 2014 Topics and Lecture Notes | 2014 Readings |
| --- | --- | --- |
| I. Introduction | | |
| 1 | Syllabus, Overview | None |
| 2 | Basic Physics of Galvanic Cells, Electrochemical Energy Conversion (PDF) | 2011 Lecture 1: Basic Physics of Galvanic Cells (PDF); 2011 Lecture 2: Electrochemical Energy Conversion (PDF); [Newman] Chapter 1; [O'Hayre] Chapter 2 |
| 3 | Electrochemical Energy Storage (PDF) | 2011 Lecture 3: Electrochemical Energy Storage (PDF); [Huggins] Chapter 1 |
| II. Circuit Models | | |
| 4 | Equivalent Circuit Dynamics | 2011 Lecture 4: Dynamics of Equivalent Circuits (PDF) |
| 5 | Impedance I | 2011 Lecture 5: Impedance Spectroscopy (PDF - 1.6MB); [Bard] Chapter 10; [O'Hayre] Chapter 7, sec. 3.4 |
| 6-7 | Impedance II & Impedance III | 2011 Lecture 6: Impedance of Electrode (PDF) |
| III. Thermodynamics | | |
| 8 | Statistical Thermodynamics, Regular Solution Model | 2011 Lecture 7: Statistical Thermodynamics (PDF) |
| 9 | Nernst Equation, Open Circuit Voltage | 2011 Lecture 8: The Nernst Equation (PDF); [Newman] Chapter 2 |
| 10 | Fuel Cells and Batteries | 2011 Lecture 9: Fuel Cells and Lead-Acid Batteries (PDF); [O'Hayre] Chapter 2 |
| 11 | Pourbaix Diagram (PDF) | 2011 Lecture 9: Fuel Cells and Lead-Acid Batteries (PDF); Prentice, Geoffrey A. Chapter 3 in Electrochemical Engineering Principles. Prentice Hall, 1990. ISBN: 9780132490382 |
| 12 | Metal Acid Batteries, Lemon Battery Demo (PDF) | None |
| 13 | Li-ion Batteries, Pseudocapacitance (PDF - 1.3MB) | 2011 Lecture 10: Li-ion Batteries (PDF); 2011 Lecture 37: Pseudocapacitors and Batteries (PDF - 2.1MB); [Huggins] Chapters 2 and 6 |
| 14 | Ideal Solution Model, Linear Sweep Voltammetry (PDF) | 2011 Lecture 11: Reconstitution Electrodes (PDF); [Huggins] Chapters 3 and 6 |
| 15 | Regular Solution Model, Phase Separation | 2011 Lecture 11: Reconstitution Electrodes (PDF) |
| IV. Kinetics | | |
| 16 | Reactions in Concentrated Solutions | 2011 Lecture 14: Faradaic Reactions in Concentrated Solutions (PDF); Bazant, M. Z. "Theory of Chemical Kinetics and Charge Transfer Based on Nonequilibrium Thermodynamics." Accounts of Chemical Research 46, no. 5 (2013): 1146-47 |
| 17 | Faradaic Reactions | 2011 Lecture 12: Faradaic Reactions in Dilute Solutions (PDF); 2011 Lecture 13: Butler-Volmer Equation (PDF); Bazant (2013): 1148-49 |
| 18 | Butler-Volmer Equation | [O'Hayre] Chapter 3; [Newman] Chapter 8; [Bard] Chapter 3 |
| 19 | Electrocatalysis (PDF) | 2011 Lecture 15: Ion Adsorption and Intercalation (PDF); Bazant (2013): 1155-57 |
| 20-21 | Electrochemical Phase Transformations (PDF - 1.5MB) | Bai, P., D. A. Cogswell, et al. "Suppression of Phase Separation in LiFePO4 Nanoparticles During Battery Discharge." Nano Letters 11, no. 11 (2011): 4890-96 |
| 22 | Homogeneous Charge Transfer (PDF) | [Bard] Chapter 3, sec. 6 |
| 23 | Heterogeneous Charge Transfer (PDF) | Bazant (2013): 1149-50 |
| 24 | Charge Transfer at Metal Electrodes (PDF) | Bai, P., and M. Z. Bazant. "Charge Transfer Kinetics at the Solid/Solid Interface in Porous Electrodes." Nature Communications 5, no. 3585 (2014); Zeng, Y., R. Smith, P. Bai, and M. Z. Bazant. "Simple Formula for Marcus-Hush-Chidsey Kinetics." J. Electroanal. Chem. 735 (2014): 77-83 |
| V. Transport Phenomena | | |
| 25 | Concentration Polarization | 2011 Lecture 16: Concentration Polarization (PDF); [O'Hayre] Chapter 5.2 |
| 26 | Transient Diffusion | 2011 Lecture 19: Transient Diffusion (PDF); [Bard] Chapters 7 and 8 |
| 27 | Warburg Impedance | 2011 Lecture 20: Warburg Impedance (PDF); [Bard] Chapter 10 |
| 28 | Forced Convection I (PDF) | 2011 Lecture 17: Forced Convection in Fuel Cells (I) (PDF); [O'Hayre] Chapter 5.3; [Newman] Chapter 17; Deen, William M. Analysis of Transport Phenomena. Oxford University Press, 1998. ISBN: 9780195084948; Braff, W. A., C. R. Buie, et al. "Boundary Layer Analysis of Electrochemical Cells." Journal of the Electrochemical Society 160, no. 11 (2013): A2056-63 |
| 29 | Forced Convection II (PDF) | Braff, W. A., C. R. Buie, et al. "Boundary Layer Analysis of Electrochemical Cells." Journal of the Electrochemical Society 160, no. 11 (2013): A2056-63 |
| 30 | Forced Convection III | 2011 Lecture 18: Forced Convection in Fuel Cells II (PDF) |
| 31 | Transport in Solids | 2011 Lecture 21: Solids and Concentrated Solutions (PDF); Bazant (2013): 1147 |
| 32 | Concentrated Solutions, Bulk Electrolytes | 2011 Lecture 22: Transport in Bulk Electrolytes (PDF); [Newman] Chapter 11 |
| 33 | Homogeneous Reaction-diffusion (PDF) | None |
| 34 | Ion Concentration Polarization | 2011 Lecture 22: Transport in Bulk Electrolytes (PDF); 2011 Lecture 23: Ion Concentration Polarization (PDF); [Newman] Chapters 11 and 4 |
| 35 | Double Layers, Supercapacitors | 2011 Lecture 24: Diffuse Charge in Electrolyte (PDF); 2011 Lecture 25: Diffuse Double Layer Structure (PDF); [Newman] Chapter 7 |
| 36 | Transport in Porous Media | 2011 Lecture 32: Percolation (PDF); 2011 Lecture 33: Macroscopic Conductivity of Composite (PDF - 1.5MB); 2011 Lecture 34: Transport in Porous Media (PDF - 1.5MB) |
| 37 | Scaling Analysis of Energy Storage | 2012 Lectures 36-37: Scaling Analysis of Energy Storage by Porous Electrodes (PDF) |
| 38 | Porous Electrodes (Overview) | 2011 Lecture 35: Porous Electrodes (I. Supercapacitors) (PDF - 1.1MB); 2011 Lecture 36: Electrochemical Supercapacitor (PDF - 1.3MB); 2011 Lecture 37: Pseudocapacitors and Batteries (PDF - 2.1MB); [Newman] Chapter 22; Ferguson, T. R., and M. Z. Bazant. "Nonequilibrium Thermodynamics of Porous Electrodes." Journal of the Electrochemical Society 159, no. 12 (2012): A1967-85 |
188898
https://chemistry-worksheets.s3.amazonaws.com/chem-1-vol-5/Chemistry+Tutor+-+Vol+5+-+Worksheet+13+-+Finding+Molecular+Mass+using+the+Ideal+Gas+Law.pdf
© MathTutorDVD.com
Chemistry 1, Volume 5
Worksheet 13: Finding Molar Mass Using the Ideal Gas Law

1. What is the molar mass of a 5.67 g sample of a gas if it occupies 25.6 L at 276 K and 1.5 atm?

2. What is the molar mass of a gas if 7.8 g of it occupies 6.7 L at 75°C and 790 mmHg?

3. A gas occupies a volume of 788 mL at 67°C and 789 Torr. If the mass of the gas is 4.5 g, what is its molar mass?

4. At a pressure of 74,000 Pa, a gas occupies a volume of 4.76 L at 298 K. If the mass of the sample is 1.54 g, what is the molar mass of the gas?

5. At 305 K and 1.53 atm, a 515 mg sample of a gas occupies a volume of 1980 mL. What is the molar mass of this sample?

6. A 5.10 g sample of a gas occupies a volume of 52.3 L at 311 K and 1.23 atm.
a. What is the molar mass of this gas?
b. What is the most likely identity of this gas?

7. An 8.93 g sample of a gas occupies a volume of 0.970 L at 1.75 atm and 304 K. What is the identity of this gaseous element?

8. A mixture of two diatomic gases occupies a combined volume of 2.68 L at 298 K. The combined mass of the two gases is 5.78 g. One of the gases is N2, which occupies a volume of 2.45 L with a mass of 4.65 g. The other gas has a pressure of 1.67 atm. What is the identity of the other gas?

Answer Key

1. What is the molar mass of a 5.67 g sample of a gas if it occupies 25.6 L at 276 K and 1.5 atm?
Step 1: P = 1.5 atm, V = 25.6 L, R = 0.08206 L·atm/(mol·K), T = 276 K; solve PV = nRT for n.
Step 2: (1.5 atm)(25.6 L) = n(0.08206 L·atm/(mol·K))(276 K), so n = 1.7 mol.
Step 3: Molar mass = grams / moles = 5.67 g / 1.7 mol = 3.3 g/mol.
Correct answer: 3.3 g/mol

2. What is the molar mass of a gas if 7.8 g of it occupies 6.7 L at 75°C and 790 mmHg?
Step 1: T = 75 + 273.15 = 348.15 K.
Step 2: 790 mmHg × (1 atm / 760 mmHg) = 1.04 atm.
Step 3: (1.04 atm)(6.7 L) = n(0.08206 L·atm/(mol·K))(348.15 K), so n = 0.24 mol.
Step 4: Molar mass = 7.8 g / 0.24 mol = 33 g/mol.
Correct answer: 33 g/mol

3. A gas occupies a volume of 788 mL at 67°C and 789 Torr. If the mass of the gas is 4.5 g, what is its molar mass?
Step 1: T = 67 + 273.15 = 340.15 K.
Step 2: 789 Torr × (1 atm / 760 Torr) = 1.04 atm.
Step 3: 788 mL × (1 L / 1000 mL) = 0.788 L.
Step 4: (1.04 atm)(0.788 L) = n(0.08206 L·atm/(mol·K))(340.15 K), so n = 0.029 mol.
Step 5: Molar mass = 4.5 g / 0.029 mol = 155 g/mol.
Correct answer: 155 g/mol

4. At a pressure of 74,000 Pa, a gas occupies a volume of 4.76 L at 298 K. If the mass of the sample is 1.54 g, what is the molar mass of the gas?
Step 1: Since the pressure is in Pa, use R = 8.314 Pa·m³/(mol·K), which means converting L to m³: 4.76 L × (1 m³ / 1000 L) = 0.00476 m³.
Step 2: (74,000 Pa)(0.00476 m³) = n(8.314 Pa·m³/(mol·K))(298 K), so n = 0.14 mol.
Step 3: Molar mass = 1.54 g / 0.14 mol = 11 g/mol.
Correct answer: 11 g/mol

5. At 305 K and 1.53 atm, a 515 mg sample of a gas occupies a volume of 1980 mL. What is the molar mass of this sample?
Step 1: Make all necessary unit conversions: 1980 mL × (1 L / 1000 mL) = 1.98 L; 515 mg × (1 g / 1000 mg) = 0.515 g.
Step 2: (1.53 atm)(1.98 L) = n(0.08206 L·atm/(mol·K))(305 K), so n = 0.12 mol.
Step 3: Molar mass = 0.515 g / 0.12 mol = 4.3 g/mol.
Correct answer: 4.3 g/mol

6. A 5.10 g sample of a gas occupies a volume of 52.3 L at 311 K and 1.23 atm.
a. What is the molar mass of this gas?
Step 1: (1.23 atm)(52.3 L) = n(0.08206 L·atm/(mol·K))(311 K), so n = 2.52 mol.
Step 2: Molar mass = 5.10 g / 2.52 mol = 2.02 g/mol.
Correct answer: 2.02 g/mol
b. What is the most likely identity of this gas?
There isn't an element on the periodic table with a mass of 2.02. Remember, however, that hydrogen (H) exists as a diatomic molecule, H2, which has a molar mass of 2.02 g/mol.
Correct answer: H2

7. An 8.93 g sample of a gas occupies a volume of 0.970 L at 1.75 atm and 304 K. What is the identity of this gaseous element?
Step 1: (1.75 atm)(0.970 L) = n(0.08206 L·atm/(mol·K))(304 K), so n = 0.0680 mol.
Step 2: Molar mass = 8.93 g / 0.0680 mol = 131 g/mol.
Step 3: The closest element on the periodic table to the calculated mass is Xe (131.29 g/mol).
Correct answer: Xe

8. A mixture of two diatomic gases occupies a combined volume of 2.68 L at 298 K. The combined mass of the two gases is 5.78 g. One of the gases is N2, which occupies a volume of 2.45 L with a mass of 4.65 g. The other gas has a pressure of 1.67 atm. What is the identity of the other gas?
Step 1: Known values for the unknown gas: P = 1.67 atm; V = 2.68 L − 2.45 L = 0.23 L; T = 298 K; mass = 5.78 g − 4.65 g = 1.13 g.
Step 2: (1.67 atm)(0.23 L) = n(0.08206 L·atm/(mol·K))(298 K), so n = 0.016 mol of the unknown gas.
Step 3: Molar mass = 1.13 g / 0.016 mol = 71 g/mol. The most likely identity of the gas is Cl2, which has a molar mass of 70.91 g/mol.
Correct answer: Cl2
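The worksheet's recipe is easy to script. Below is a minimal R sketch (the function name molar_mass and its argument order are illustrative choices, not from the worksheet) that solves PV = nRT for n and then divides the sample mass by it; the calls reproduce problems 1, 5, and 8 up to rounding:

# Pressure in atm, volume in L, temperature in K, mass in g.
molar_mass <- function(mass_g, P_atm, V_L, T_K) {
  R <- 0.08206                    # gas constant, L atm / (mol K)
  n <- P_atm * V_L / (R * T_K)    # moles from the ideal gas law
  mass_g / n                      # molar mass in g/mol
}
molar_mass(5.67, 1.5, 25.6, 276)    # problem 1: about 3.3 g/mol
molar_mass(0.515, 1.53, 1.98, 305)  # problem 5: about 4.3 g/mol
molar_mass(1.13, 1.67, 0.23, 298)   # problem 8: roughly 72 g/mol, pointing to Cl2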
188899
https://sites.calvin.edu/scofield/courses/m343/F15/handouts/binomialTestPower.pdf
Power of a Binomial Test
T.L. Scofield
9/23/2015

Rejection region for 100 coin flips

From the command

library(mosaic)   # plotDist() is from the mosaic package, which also attaches lattice for xyplot()
qbinom(.025, 100, .5)
## 40

we learn that, for a binomial random variable X ∼ Binom(100, .5), the cumulative probability up to but not including X = 40 is 0.025. Actually, that is not quite true, since

pbinom(40, 100, .5)
## 0.02844397
pbinom(39, 100, .5)
## 0.0176001

which shows P(X ≤ 39) = 0.0176, while P(X ≤ 40) = 0.0284; we cannot hit 0.025 exactly. Since the two-tailed area P(X ≤ 39 or X ≥ 61) = 2 · P(X ≤ 39) ≈ 0.0352, while the two-tailed area P(X ≤ 40 or X ≥ 60) = 2 · P(X ≤ 40) ≈ 0.05688, the former is the appropriate rejection region for a hypothesis test involving the count of heads in 100 flips with hypotheses H0 : π = 0.5, Ha : π ≠ 0.5 and significance level α = 0.05. We display the null distribution along with the rejection region in red:

plotDist("binom", params=c(100, .5), col=c("red","forestgreen"), groups=abs(x-50) <= 10)

[Figure: Binom(100, 0.5) null distribution; the rejection region (x ≤ 39 or x ≥ 61) in red, the non-rejection region in green.]

Computing β, the probability of Type II Error

Suppose our coin actually has a probability of landing "heads" equaling 0.75. Then, counter to what is hypothesized in H0, X ∼ Binom(100, 0.75). We overlay this distribution (displayed in gray) with the null distribution.

plotDist("binom", params=c(100, .5), col=c("red","forestgreen"), groups=abs(x-50) <= 10, xlim=c(30,80), ylim=c(0,0.1))
plotDist("binom", params=c(100, .75), col="gray60", add=TRUE)

[Figure: null distribution with rejection region, overlaid with the Binom(100, 0.75) distribution in gray.]

The probability of making a Type II error, β, should be small, as the likelihood of values from our coin (with πa = 0.75) falling in the green region (where the null hypothesis is not rejected) appears to be small. We can find its actual value with commands like

sum(dbinom(40:60, 100, 0.75))
## 0.0006865922

or

pbinom(60, 100, .75) - pbinom(39, 100, .75)
## 0.0006865922

Now, if our coin has a probability of "heads" equaling 0.55, the likelihood of Type II error should rise. The gray distribution (corresponding to how the coin actually behaves) has a lot more of its probability lying inside the non-rejection region.

plotDist("binom", params=c(100, .5), col=c("red","forestgreen"), groups=abs(x-50) <= 10, xlim=c(30,80), ylim=c(0,0.1))
plotDist("binom", params=c(100, .55), col="gray60", add=TRUE)

[Figure: null distribution with rejection region, overlaid with the Binom(100, 0.55) distribution in gray.]

We compute β as before, seeing (as predicted) that it is much larger than before.

pbinom(60, 100, .55) - pbinom(39, 100, .55)
## 0.8648077

So β can only be calculated when we make a presumption about πa, the probability our coin produces a "head". Not only does its value depend on how far the true value of π is from what is hypothesized, but it also depends on the choice of significance level α.

Power

Power is defined as the probability a false null hypothesis is rejected. So power = 1 − P(not rejecting a false H0) = 1 − β. Like β, it relies on α and knowledge of πa, making it difficult to calculate. We may illustrate how the power of a binomial test changes as πa changes.

piAlt = seq(0, 1, .02)                                     # grid of alternative values of pi
myBeta = pbinom(60, 100, piAlt) - pbinom(39, 100, piAlt)   # Type II error probability at each alternative
xyplot(1-myBeta ~ piAlt, type="l", xlab="probability of success", ylab="Power")

[Figure: power of the test as a function of the true probability of success.]

You can, in fact, increase the power of a binomial test at any fixed value of πa and α by increasing the sample size n. Our next plot gives power for different choices of n, assuming that πa = 0.55 and α = 0.05.
enn = 1:2000                       # candidate sample sizes
critical = qbinom(.025, enn, .5)   # lower critical value for each n
beta = pbinom(enn-critical, enn, .55) - pbinom(critical-1, enn, .55)   # P(count lands in [critical, n - critical] | pi_a = 0.55)
xyplot(1-beta ~ enn, type="l", lwd=0.5, xlab="n", ylab="power")

[Figure: power versus sample size n, for πa = 0.55 and α = 0.05.]
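A natural follow-up, not in the original handout, is to read a required sample size off this computation. Under the same assumptions (πa = 0.55, α = 0.05), and reusing enn and beta from the chunk above:

min(enn[1 - beta >= 0.80])   # first n in the grid whose exact power reaches 0.80
# Because exact binomial power oscillates with n, power can dip back below 0.80
# slightly beyond this point; a normal-approximation sample-size formula puts
# the target in the same neighborhood, roughly n = 783.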